Trophic coherence is a property of directed graphs (or directed networks). [1] It is based on the concept of trophic levels used mainly in ecology, [2] which can be defined for directed networks in general and provides a measure of hierarchical structure among nodes. Trophic coherence is the tendency of nodes to fall into well-defined trophic levels. It has been related to several structural and dynamical properties of directed networks, including the prevalence of cycles [3] and network motifs, [4] ecological stability, [1] intervality, [5] and spreading processes like epidemics and neuronal avalanches. [6]
Consider a directed network defined by the adjacency matrix A = (a_{ij}), with a_{ij} = 1 if there is an edge from node j to node i, and a_{ij} = 0 otherwise. Each node i can be assigned a trophic level s_i according to

s_i = 1 + \frac{1}{k_i^{\text{in}}} \sum_j a_{ij} s_j,

where k_i^{\text{in}} = \sum_j a_{ij} is node i's in-degree, and nodes with k_i^{\text{in}} = 0 (basal nodes) have s_i = 1 by convention. Each edge has a trophic difference associated, defined as x_{ij} = s_i - s_j. The trophic coherence of the network is a measure of how tightly peaked the distribution of trophic distances, p(x), is around its mean value, which is always \langle x \rangle = 1. This can be captured by an incoherence parameter q, equal to the standard deviation of p(x):

q = \sqrt{\frac{1}{L} \sum_{ij} a_{ij} x_{ij}^2 - 1},

where L = \sum_{ij} a_{ij} is the number of edges in the network. [1]
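The definitions above translate directly into a short computation: solve the linear system for the trophic levels, then take the standard deviation of the trophic differences over all edges. The following sketch (in Python with NumPy; the function names are illustrative, not from the cited works) assumes every node is reachable from some basal node, so the system has a unique solution.

```python
import numpy as np

def trophic_levels(A):
    """Trophic levels s_i = 1 + (1/k_i^in) * sum_j A[i, j] * s_j.

    A[i, j] = 1 encodes an edge from node j to node i; basal nodes
    (in-degree zero) get s = 1 by convention. Assumes every node is
    reachable from some basal node, so the linear system is solvable.
    """
    A = np.asarray(A, dtype=float)
    k_in = A.sum(axis=1)
    # For basal nodes the row reduces to s_i = 1.
    M = np.diag(np.maximum(k_in, 1.0)) - A
    b = np.maximum(k_in, 1.0)
    return np.linalg.solve(M, b)

def incoherence(A):
    """Incoherence parameter q: the standard deviation of the trophic
    differences x = s_i - s_j over all edges (their mean is always 1)."""
    s = trophic_levels(A)
    rows, cols = np.nonzero(np.asarray(A))
    x = s[rows] - s[cols]
    return np.sqrt(np.mean(x ** 2) - 1.0)
```

For a three-node chain 1 → 2 → 3 the levels are exactly 1, 2 and 3 and q = 0; adding a feed-forward edge from the basal node to the top node gives the latter the fractional level 2.5 and a nonzero q.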
The figure shows two networks which differ in their trophic coherence. The position of the nodes on the vertical axis corresponds to their trophic level. In the network on the left, nodes fall into distinct (integer) trophic levels, so the network is maximally coherent (q = 0). In the one on the right, many of the nodes have fractional trophic levels, and the network is more incoherent (q > 0). [6]
The degree to which empirical networks are trophically coherent (or incoherent) can be investigated by comparison with a null model. This is provided by the basal ensemble, which comprises networks in which all non-basal nodes have the same proportion of basal nodes for in-neighbours. [3] Expected values in this ensemble converge to those of the widely used configuration ensemble [7] in the limit N, L \to \infty (with N and L the numbers of nodes and edges), and can be shown numerically to be a good approximation for finite random networks. The basal ensemble expectation for the incoherence parameter is

\tilde{q} = \sqrt{\frac{L}{L_B} - 1},

where L_B is the number of edges connected to basal nodes. [3] The ratio q/\tilde{q} measured in empirical networks reveals whether they are more or less coherent than the random expectation. For instance, Johnson and Jones [3] find in a set of networks that food webs are significantly coherent (q/\tilde{q} < 1), metabolic networks are significantly incoherent (q/\tilde{q} > 1), and gene regulatory networks are close to the random expectation (q/\tilde{q} \approx 1).
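The null-model comparison is equally direct to compute: count the edges leaving basal nodes and apply the formula above. A small helper under the same edge convention (A[i, j] = 1 for an edge j → i; the function name is illustrative):

```python
import numpy as np

def basal_ensemble_expectation(A):
    """Expected incoherence q~ = sqrt(L / L_B - 1) in the basal ensemble,
    where L is the total number of edges and L_B the number of edges
    whose source is a basal (zero in-degree) node."""
    A = np.asarray(A, dtype=float)
    basal = A.sum(axis=1) == 0          # nodes with no in-neighbours
    L = A.sum()
    L_B = A[:, basal].sum()             # edges leaving basal nodes
    return np.sqrt(L / L_B - 1.0)
```

The ratio of the measured q to this expectation then indicates whether a network is more coherent (ratio below 1) or less coherent (ratio above 1) than the random expectation.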
There is as yet little understanding of the mechanisms which might lead to particular kinds of networks becoming significantly coherent or incoherent. [3] However, in systems which present correlations between trophic level and other features of nodes, processes which tended to favour the creation of edges between nodes with particular characteristics could induce coherence or incoherence. In the case of food webs, predators tend to specialise on consuming prey with certain biological properties (such as size, speed or behaviour) which correlate with their diet, and hence with trophic level. This has been suggested as the reason for food-web coherence. [1] However, food-web models based on a niche axis do not reproduce realistic trophic coherence, [1] which may mean either that this explanation is insufficient, or that several niche dimensions need to be considered. [8]
The relation between trophic level and node function can be seen in networks other than food webs. The figure shows a word adjacency network derived from the book Green Eggs and Ham, by Dr. Seuss. [3] The height of nodes represents their trophic levels (computed here with the edge direction opposite to that suggested by the arrows, which indicate the order in which words are concatenated in sentences). The syntactic function of words is also shown with node colour. There is a clear relationship between syntactic function and trophic level: common nouns (blue) and verbs (red) occupy clearly separated mean trophic levels. This example illustrates how trophic coherence or incoherence might emerge from node function, and also that the trophic structure of networks provides a means of identifying node function in certain systems.
There are various ways of generating directed networks with specified trophic coherence, all based on gradually introducing new edges to the system in such a way that the probability of each new candidate edge being accepted depends on the expected trophic difference it would have.
The preferential preying model is an evolving network model similar to the Barabási–Albert model of preferential attachment, but inspired by an ecosystem that grows through immigration of new species. [1] One begins with B basal nodes and proceeds to introduce new nodes up to a total of N. Each new node i is assigned a first in-neighbour j (a prey species in the food-web context), chosen at random from among the nodes already present, and a new edge is placed from j to i. The new node is given a temporary trophic level s_i^{\text{tmp}} = s_j + 1. Then a further \kappa_i - 1 new in-neighbours are chosen for i from among those in the network according to their trophic levels. Specifically, for a new candidate in-neighbour l, the probability of being chosen is a function of x_{il} = s_i^{\text{tmp}} - s_l. Johnson et al [1] use a kernel of the form

P_{il} \propto \exp\!\left(-\frac{(x_{il} - 1)^2}{2T^2}\right),

where T is a parameter which tunes the trophic coherence: for T \to 0 maximally coherent networks (q = 0) are generated, and q increases monotonically with T for T > 0. The choice of \kappa_i is arbitrary. One possibility is to set \kappa_i = z_i n_i, where n_i is the number of nodes already in the network when node i arrives, and z_i is a random variable drawn from a Beta distribution with parameters 1 and

\beta = \frac{N^2}{2L} - 1

(L being the desired number of edges). This way, the generalised cascade model [9] [10] is recovered in the limit T \to \infty, and the degree distributions are as in the niche model [11] and generalised niche model. [10] This algorithm, as described above, generates networks with no cycles (except for self-cycles, if the new node i is itself considered among its candidate in-neighbours). In order for cycles of all lengths to be possible, one can consider new candidate edges in which the new node is the in-neighbour as well as those in which it would be the out-neighbour. The probability of acceptance of these edges then depends on the corresponding trophic difference x_{li} = s_l - s_i^{\text{tmp}}.
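The growth procedure can be sketched in code. This is a simplified, illustrative reading of the acyclic variant of the model rather than the authors' reference implementation: the Gaussian attachment kernel centred on a trophic difference of 1 and the Beta-distributed number of extra in-neighbours are assumptions, and cycles are not generated.

```python
import numpy as np

def preferential_preying(N, B, L, T, seed=0):
    """Sketch of the preferential preying model (acyclic variant).

    A[i, j] = 1 encodes an edge j -> i. Each new node takes one random
    first in-neighbour, then extra in-neighbours chosen with probability
    proportional to exp(-(x - 1)^2 / (2 T^2)), where x is the temporary
    trophic difference. Details here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    A = np.zeros((N, N))
    s = np.ones(N)                           # temporary trophic levels
    beta = max(N * N / (2.0 * L) - 1.0, 1e-9)
    for i in range(B, N):
        j = rng.integers(0, i)               # first in-neighbour
        A[i, j] = 1.0
        s[i] = s[j] + 1.0
        # number of further in-neighbours via a Beta-distributed fraction
        extra = int(rng.beta(1.0, beta) * i)
        x = s[i] - s[:i]                     # candidate trophic differences
        p = np.exp(-((x - 1.0) ** 2) / (2.0 * T ** 2))
        p[A[i, :i] > 0] = 0.0                # no repeated edges
        k = min(extra, int((p > 0).sum()))
        if k > 0:
            chosen = rng.choice(i, size=k, replace=False, p=p / p.sum())
            A[i, chosen] = 1.0
    return A
```

Small T concentrates new edges on candidates exactly one temporary level below the new node, yielding coherent networks; large T spreads them out, increasing q.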
The generalised preferential preying model [6] is similar to the one described above, but has certain advantages. In particular, it is more analytically tractable, and one can generate networks with a precise number of edges L. The network begins with B basal nodes, and then a further N - B new nodes are added in the following way. When each node i enters the system, it is assigned a single in-neighbour j, chosen randomly from among those already there. Every node then has an integer temporary trophic level s_i^{\text{tmp}}. The remaining L - (N - B) edges are introduced as follows. Each pair of nodes (i, j) has two temporary trophic distances associated, x_{ij} = s_i^{\text{tmp}} - s_j^{\text{tmp}} and x_{ji} = -x_{ij}. Each of these candidate edges is accepted with a probability that depends on this temporary distance. Klaise and Johnson [6] use

P_{ij} = \exp\!\left(-\frac{(x_{ij} - 1)^2}{2T^2}\right)

because they find the distribution of trophic distances in several kinds of networks to be approximately normal, and this choice leads to a range of the parameter T in which q \simeq T. Once all L edges have been introduced, one must recalculate the trophic levels of all nodes, since these will in general differ from the temporary ones originally assigned unless T \simeq 0. As with the preferential preying model, the average incoherence parameter q of the resulting networks is a monotonically increasing function of T for T > 0. The figure above shows two networks with different trophic coherence generated with this algorithm.
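A sketch of this second algorithm follows, with rejection sampling of random candidate edges standing in for the normalised acceptance procedure; the function name and these implementation details are illustrative, not taken from the cited paper.

```python
import numpy as np

def generalised_preferential_preying(N, B, L, T, seed=0):
    """Sketch of the generalised preferential preying model.

    Builds an initial in-tree over the N - B non-basal nodes (giving
    integer temporary trophic levels), then adds the remaining
    L - (N - B) edges, accepting each random candidate j -> i with
    probability exp(-(x_ij - 1)^2 / (2 T^2)), x_ij = s_i - s_j being
    the temporary trophic difference. Rejection sampling is a
    simplification used here for brevity.
    """
    rng = np.random.default_rng(seed)
    A = np.zeros((N, N))
    s = np.ones(N)                         # temporary trophic levels
    for i in range(B, N):
        j = rng.integers(0, i)             # single initial in-neighbour
        A[i, j] = 1.0
        s[i] = s[j] + 1.0
    remaining = L - (N - B)
    while remaining > 0:
        i, j = rng.integers(0, N, size=2)
        # keep basal nodes basal; skip self-loops and repeated edges
        if i == j or A[i, j] or s[i] == 1.0:
            continue
        x = s[i] - s[j]
        if rng.random() < np.exp(-((x - 1.0) ** 2) / (2.0 * T ** 2)):
            A[i, j] = 1.0
            remaining -= 1
    return A
```

Because the temporary levels are frozen while edges are added, the final trophic levels must be recomputed from the full adjacency matrix afterwards, as noted above.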