In graph theory, the Katz centrality or alpha centrality of a node is a measure of centrality in a network. It was introduced by Leo Katz in 1953 and is used to measure the relative degree of influence of an actor (or node) within a social network. [1] Unlike typical centrality measures which consider only the shortest path (the geodesic) between a pair of actors, Katz centrality measures influence by taking into account the total number of walks between a pair of actors. [2]
It is similar to Google's PageRank and to the eigenvector centrality. [3]
Katz centrality computes the relative influence of a node within a network by measuring the number of its immediate neighbors (first-degree nodes) and also all other nodes in the network that connect to the node under consideration through these immediate neighbors. Connections made with distant neighbors are, however, penalized by an attenuation factor $\alpha$. [4] Each path or connection between a pair of nodes is assigned a weight determined by $\alpha$ and the distance $d$ between the nodes, namely $\alpha^d$.
For example, consider a small network in which John's centrality is being measured and $\alpha = 0.5$ is assumed. The weight assigned to each link that connects John with his immediate neighbors Jane and Bob will be $(0.5)^1 = 0.5$. Since Jose connects to John indirectly through Bob, the weight assigned to this connection (composed of two links) will be $(0.5)^2 = 0.25$. Similarly, the weight assigned to the connection between Agneta and John through Aziz and Jane will be $(0.5)^3 = 0.125$, and the weight assigned to the connection between Agneta and John through Diego, Jose and Bob will be $(0.5)^4 = 0.0625$.
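This arithmetic can be reproduced in a few lines of Python (a minimal sketch; the actors and connection lengths are taken directly from the example above):

```python
# Attenuation factor assumed in the example above.
alpha = 0.5

# Number of links in each connection between John and the other actors.
connections = {
    "Jane (direct)": 1,
    "Bob (direct)": 1,
    "Jose via Bob": 2,
    "Agneta via Aziz and Jane": 3,
    "Agneta via Diego, Jose and Bob": 4,
}

for actor, length in connections.items():
    # Each connection is weighted by alpha raised to its length.
    print(f"{actor}: {alpha ** length}")
```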
Let $A$ be the adjacency matrix of a network under consideration. Elements $a_{ij}$ of $A$ take a value 1 if node $i$ is connected to node $j$ and 0 otherwise. The powers of $A$ indicate the presence (or absence) of links between two nodes through intermediaries. For instance, in the matrix $A^3$, if element $a_{2,12} = 1$, it indicates that node 2 and node 12 are connected through some walk of length 3. If $C_{\mathrm{Katz}}(i)$ denotes the Katz centrality of node $i$, then, given an attenuation factor $\alpha$, mathematically:

$$C_{\mathrm{Katz}}(i) = \sum_{k=1}^{\infty} \sum_{j=1}^{n} \alpha^k \left(A^k\right)_{ji}$$

Note that the above definition uses the fact that the element at location $(i,j)$ of $A^k$ reflects the total number of $k$-degree connections between nodes $i$ and $j$. The value of the attenuation factor $\alpha$ has to be chosen such that it is smaller than the reciprocal of the absolute value of the largest eigenvalue of $A$. [5] In this case the following expression can be used to calculate Katz centrality:

$$\vec{C}_{\mathrm{Katz}} = \left( \left(I - \alpha A^T\right)^{-1} - I \right) \vec{I}$$

Here $I$ is the identity matrix and $\vec{I}$ is a vector of size $n$ ($n$ is the number of nodes) consisting of ones. $A^T$ denotes the transpose of $A$, and $\left(I - \alpha A^T\right)^{-1}$ denotes the matrix inverse of the term $\left(I - \alpha A^T\right)$. [5]
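A minimal sketch of this closed-form computation in Python with NumPy; the small example graph is hypothetical, and $\alpha$ is set at half the convergence bound:

```python
import numpy as np

# Adjacency matrix of a small hypothetical undirected network.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

n = A.shape[0]
I = np.eye(n)
ones = np.ones(n)

# alpha must be below 1 / (largest eigenvalue of A); take half of that bound.
alpha = 0.5 / max(abs(np.linalg.eigvals(A)))

# Closed form: C_Katz = ((I - alpha * A^T)^{-1} - I) applied to a vector of ones.
katz = (np.linalg.inv(I - alpha * A.T) - I) @ ones

# Cross-check against a truncation of the walk series sum_{k>=1} alpha^k (A^k)_{ji}.
series = sum(alpha ** k * np.linalg.matrix_power(A.T, k) @ ones
             for k in range(1, 60))
assert np.allclose(katz, series)
print(katz)
```

Both routes agree because $(I - \alpha A^T)^{-1} - I = \sum_{k \geq 1} (\alpha A^T)^k$ whenever $\alpha$ satisfies the eigenvalue condition.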
An extension of this framework allows the walks to be computed in a dynamical setting. [6] [7] By taking a time-dependent series of adjacency snapshots of the network's transient edges, walks accumulate their contributions towards a cumulative effect across time. The arrow of time is preserved, so that the contribution of activity is asymmetric in the direction of information propagation.
Consider a network producing data of the form:

$$A^{[k]} \in \mathbb{R}^{N \times N}, \qquad k = 0, 1, 2, \ldots, M,$$

representing the adjacency matrix at each time $t_k$. Hence:

$$\left(A^{[k]}\right)_{ij} = \begin{cases} 1 & \text{if node } i \text{ and node } j \text{ are connected by an edge at time } t_k, \\ 0 & \text{otherwise.} \end{cases}$$
The time points $t_0 < t_1 < \cdots < t_M$ are ordered but not necessarily equally spaced. The dynamic communicability matrix $\mathcal{Q}^{[k]} \in \mathbb{R}^{N \times N}$ is built so that $\left(\mathcal{Q}^{[k]}\right)_{ij}$ is a weighted count of the dynamic walks from node $i$ to node $j$ that respect the time ordering of the snapshots, with longer walks attenuated more strongly. The form for the dynamic communicability between participating nodes is:

$$\mathcal{Q}^{[k]} = \mathcal{Q}^{[k-1]} \left(I - a A^{[k]}\right)^{-1}, \qquad \mathcal{Q}^{[-1]} = I,$$

where the parameter $a$ plays the role of the attenuation factor and must be smaller than the reciprocal of the largest spectral radius among the snapshots. This can be normalized via:

$$\hat{\mathcal{Q}}^{[k]} = \frac{\mathcal{Q}^{[k]}}{\left\| \mathcal{Q}^{[k]} \right\|}$$

Therefore, centrality measures that quantify how effectively node $n$ can 'broadcast' and 'receive' dynamic messages across the network are:

$$C_n^{\mathrm{broadcast}} = \sum_{j=1}^{N} \mathcal{Q}_{nj}, \qquad C_n^{\mathrm{receive}} = \sum_{j=1}^{N} \mathcal{Q}_{jn}$$
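A sketch of this dynamic computation in Python, assuming the product form of the recursion given above; the three snapshots and the value of $a$ are hypothetical:

```python
import numpy as np

def dynamic_communicability(snapshots, a):
    """Accumulate Q across time-ordered adjacency snapshots via
    Q <- Q @ inv(I - a * A[k]), normalizing to keep entries bounded."""
    n = snapshots[0].shape[0]
    Q = np.eye(n)
    for A_k in snapshots:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A_k)
        Q /= np.linalg.norm(Q)  # normalization step described above
    return Q

# Hypothetical 3-node network whose single edge moves over time.
snapshots = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float),  # t0: edge 0-1
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float),  # t1: edge 1-2
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=float),  # t2: edge 0-2
]

a = 0.25  # below 1 / max spectral radius of the snapshots (here 1)
Q = dynamic_communicability(snapshots, a)

broadcast = Q.sum(axis=1)  # row sums: broadcast centrality of each node
receive = Q.sum(axis=0)    # column sums: receive centrality of each node
print(broadcast, receive)
```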
Given a graph with adjacency matrix $A$, Katz centrality is defined as follows:

$$\vec{x}_{\mathrm{Katz}} = \left( \left(I - \alpha A^T\right)^{-1} - I \right) \vec{v},$$

where $\vec{v}$ is the vector of external importance given to each node and $\alpha$ is a nonnegative attenuation factor which must be smaller than the inverse of the spectral radius of $A$. The original definition by Katz [8] used a constant vector $\vec{v}$. Hubbell [9] introduced the usage of a general $\vec{v}$.
Half a century later, Bonacich and Lloyd [10] defined alpha centrality as:

$$\vec{x}_{\alpha} = \left(I - \alpha A^T\right)^{-1} \vec{v},$$

which is essentially identical to Katz centrality. More precisely, since $\vec{x}_{\alpha} = \vec{x}_{\mathrm{Katz}} + \vec{v}$, the score of node $i$ differs exactly by $v_i$; so if $\vec{v}$ is constant, the order induced on the nodes is identical.
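This relationship can be verified numerically (a small sketch; the example graph is hypothetical and $\vec{v}$ is taken constant):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

n = A.shape[0]
I = np.eye(n)
v = np.ones(n)  # constant external importance

alpha = 0.5 / max(abs(np.linalg.eigvals(A)))

katz = (np.linalg.inv(I - alpha * A.T) - I) @ v
alpha_centrality = np.linalg.inv(I - alpha * A.T) @ v

# The two scores differ by exactly v at every node, so with constant v
# they rank the nodes identically.
assert np.allclose(alpha_centrality - katz, v)
print(np.argsort(-katz), np.argsort(-alpha_centrality))
```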
Katz centrality can be used to compute centrality in directed networks such as citation networks and the World Wide Web. [11]
Katz centrality is more suitable in the analysis of directed acyclic graphs, where traditionally used measures like eigenvector centrality are rendered useless (the adjacency matrix of an acyclic graph is nilpotent, so its only eigenvalue is zero and there is no meaningful dominant eigenvector). [11]
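For instance, NetworkX's `katz_centrality` runs without trouble on a directed acyclic graph, where eigenvector centrality has no meaningful fixed point (the small citation-style DAG below is hypothetical, and note that NetworkX's variant includes the constant term $\beta$, in the style of alpha centrality):

```python
import networkx as nx

# Hypothetical citation DAG: an edge u -> v means u cites v.
G = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])

# A DAG's adjacency matrix is nilpotent (spectral radius 0), so any
# positive alpha satisfies the convergence condition.
scores = nx.katz_centrality(G, alpha=0.5, beta=1.0)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```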
Katz centrality can also be used to estimate the relative status or influence of actors in a social network. The work presented in [12] gives a case study applying a dynamic version of Katz centrality to Twitter data, focusing on particular brands which have stable discussion leaders. The application allows the methodology to be compared against human experts in the field, and the results agree with a panel of social media experts.
In neuroscience, it has been found that Katz centrality correlates with the relative firing rate of neurons in a neural network. [13] The temporal extension of Katz centrality is applied in [14] to fMRI data obtained from a musical learning experiment, in which data is collected from the subjects before and after the learning process. The results show that changes in network structure over the course of the musical exposure, quantified in each session through cross-communicability, produced clusters in line with the success of learning.
A generalized form of Katz centrality can be used as an intuitive ranking system for sports teams, such as in college football. [15]
Alpha centrality is implemented in the igraph library for network analysis and visualization. [16]