A network is an abstract structure that captures only the basics of connection patterns and little else. Because it is such a general representation, tools developed for analyzing, modeling and understanding networks can in principle be applied across disciplines. As long as a system can be represented by a network, an extensive, well-developed set of mathematical, computational and statistical tools can be applied to the analysis of that system.
The tools currently employed in risk assessment are often sufficient, but model complexity and limits on computational power can prevent risk assessors from incorporating additional causal connections and accounting for more Black Swan event outcomes. By applying network theory tools to risk assessment, these computational limitations may be overcome, resulting in broader coverage of events with a narrower range of uncertainties. [1]
Decision-making processes are not incorporated into routine risk assessments; nevertheless, they play a critical role in shaping such assessments. [2] It is therefore very important for risk assessors to minimize confirmation bias by carrying out their analysis and publishing their results with minimal involvement of external factors such as politics, media and advocates. In reality, however, it is nearly impossible to break the iron triangle among politicians, scientists (in this case, risk assessors), and advocates and media. [3] Risk assessors also need to be sensitive to the difference between risk studies and risk perceptions. [4] [5] One way to bring the two closer is to provide decision-makers with data they can easily rely on and understand. Employing networks in the risk analysis process can help visualize causal relationships and identify the heavily weighted or otherwise important contributors to the probability of the critical event. [6]
Bow-tie diagrams, cause-and-effect diagrams, Bayesian networks (directed acyclic networks) and fault trees are a few examples of how network theory can be applied in risk assessment. [7]
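As a minimal illustration of how such a structure can be evaluated, the sketch below computes the top-event probability of a small, hypothetical fault tree. The event names and probabilities are invented for the example, and the basic events are assumed to be statistically independent; real fault-tree software handles far richer gate types and dependencies.

```python
# Minimal fault-tree evaluation sketch (hypothetical events and probabilities).
# Assumes the basic events are statistically independent.

def and_gate(probabilities):
    """Output event occurs only if all inputs occur: P = product of inputs."""
    p = 1.0
    for prob in probabilities:
        p *= prob
    return p

def or_gate(probabilities):
    """Output event occurs if any input occurs: P = 1 - product of (1 - input)."""
    p_none = 1.0
    for prob in probabilities:
        p_none *= (1.0 - prob)
    return 1.0 - p_none

# Hypothetical basic-event probabilities (per demand).
p_pump_failure   = 0.01
p_valve_stuck    = 0.02
p_operator_error = 0.05

# "Loss of cooling" occurs if the pump fails OR the valve sticks;
# the critical event also requires the operator to miss the alarm (AND).
p_loss_of_cooling = or_gate([p_pump_failure, p_valve_stuck])
p_critical_event  = and_gate([p_loss_of_cooling, p_operator_error])

print(f"P(loss of cooling) = {p_loss_of_cooling:.4f}")
print(f"P(critical event)  = {p_critical_event:.5f}")
```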
In epidemiological risk assessments (Figures 7 and 9), once a network model is constructed, we can visually identify, then quantify and evaluate, the potential exposure or infection risk of people connected to the well-connected patients (Patients 1, 6, 35, 130 and 127 in Figure 7) or to high-traffic places (Hotel M in Figure 9). In ecological risk assessments (Figure 8), a network model lets us identify the keystone species and determine how widely the impacts of the potential hazards under investigation will spread.
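A rough sketch of that kind of screening, using the open-source networkx library and an invented contact list (the patient labels below are placeholders, not the cases shown in Figure 7), might look like this:

```python
# Identify well-connected patients in a hypothetical contact-tracing network.
import networkx as nx

# Invented contact pairs: (case, contacted case).
contacts = [
    ("P1", "P2"), ("P1", "P3"), ("P1", "P4"), ("P1", "P5"),
    ("P6", "P1"), ("P6", "P7"), ("P6", "P8"),
    ("P9", "P10"),
]

G = nx.Graph()
G.add_edges_from(contacts)

# Degree centrality highlights the most connected (highest-exposure) patients.
ranked = sorted(nx.degree_centrality(G).items(), key=lambda kv: kv[1], reverse=True)
for patient, score in ranked[:3]:
    print(f"{patient}: degree centrality {score:.2f}")
```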
Risk assessment is a method for dealing with uncertainty. For it to benefit the overall risk management and decision-making process, it must be able to capture extreme and catastrophic events. Risk assessment involves two parts, risk analysis and risk evaluation, although the term “risk assessment” is often used interchangeably with “risk analysis”. In general, risk assessment can be divided into these steps: [8]
Naturally, the number of steps required varies with each assessment, depending on the scope of the analysis and the complexity of the study object. [9] Because there are always varying degrees of uncertainty involved in any risk analysis process, sensitivity and uncertainty analyses are usually carried out to mitigate that uncertainty and thereby improve the overall risk assessment result.
A network is a simplified representation that reduces a system to an abstract structure. Simply put, it is a collection of points linked together by lines. Each point is known as a “vertex” (plural: “vertices”) or “node”, and each line as an “edge” or “link”. [10] Network modeling and analysis have already been applied in many areas, including the computer, physical, biological, ecological, logistical and social sciences. By studying these models, we gain insight into the nature of individual components (i.e. vertices), the connections or interactions between those components (i.e. edges), and the pattern of connections (i.e. the network).
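In code, a network of this kind reduces to little more than a collection of vertices and edges. The short sketch below, using the networkx library with arbitrary vertex labels, is one way to build and inspect such a structure:

```python
import networkx as nx

# Build a small undirected network; vertices are added implicitly with each edge.
G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])

print(G.number_of_nodes(), "vertices:", list(G.nodes))
print(G.number_of_edges(), "edges:", list(G.edges))
print("Neighbours of C:", list(G.neighbors("C")))
```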
Undoubtedly, modifications to the structure (or pattern) of any given network can have a big effect on the behavior of the system it depicts. For example, connections in a social network affect how people communicate, exchange news, travel and, less obviously, spread diseases. To gain a better understanding of how each of these systems functions, some knowledge of the structure of the network is necessary.
Small-World Effect
Degree, Hubs, and Paths
Centrality
Components
Directed Networks
Weighted Network
Trees
Early social network studies can be traced back to the end of the nineteenth century. However, well-documented studies and the foundation of this field are usually attributed to the psychiatrist Jacob Moreno. He published a book entitled Who Shall Survive? in 1934 that laid the foundation for sociometry (later known as social network analysis).
Another famous contributor to the early development of social network analysis was the experimental psychologist Stanley Milgram. His "small-world" experiments gave rise to concepts such as six degrees of separation and well-connected acquaintances (also known as "sociometric superstars"). The experiment was recently repeated by Dodds et al. using email messages, and the basic results were similar to Milgram's. The estimated true average path length (that is, the number of edges an email message had to traverse from one individual to its intended target in another country) was around five to seven, which does not deviate much from the original six degrees of separation. [14]
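The small-world effect is easy to reproduce numerically. The sketch below measures the mean shortest-path length of a Watts-Strogatz small-world graph generated with networkx; the model and its parameters are an arbitrary choice for illustration, not the network Milgram or Dodds et al. actually studied.

```python
import networkx as nx

# A small-world graph: 1,000 vertices, each initially joined to its 10 nearest
# neighbours on a ring, with 10% of the edges rewired at random.
G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=42)

# Despite the network's size, the average shortest path stays short.
print("Average shortest path length:", round(nx.average_shortest_path_length(G), 2))
print("Vertices:", G.number_of_nodes(), " Edges:", G.number_of_edges())
```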
A food web, or food chain, is an example of a directed network, which describes prey-predator relationships in a given ecosystem. Vertices in this type of network represent species, and the edges represent the prey-predator relationships. A collection of species may be represented by a single vertex if all members of that collection prey upon and are preyed on by the same organisms. A food web is often acyclic, with few exceptions such as adults preying on juveniles and parasitism. [15]
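A toy food web (the species and links below are invented for illustration) can be encoded as a directed graph and checked for cycles:

```python
import networkx as nx

# Edges point from prey to predator (an arbitrary but common convention).
food_web = nx.DiGraph([
    ("grass", "rabbit"), ("grass", "insect"),
    ("insect", "frog"), ("rabbit", "fox"), ("frog", "fox"),
])

print("Acyclic food web:", nx.is_directed_acyclic_graph(food_web))
print("Predators of the frog:", list(food_web.successors("frog")))
print("Prey of the fox:", list(food_web.predecessors("fox")))
```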
Epidemiology is closely related to social networks. Contagious diseases can spread through connection networks such as workplaces, transportation, intimate bodily contact and water systems (see Figures 7 and 9). Though they exist only virtually, computer viruses spreading across internet networks are not much different from their physical counterparts. Therefore, understanding each of these network patterns can no doubt aid us in predicting the outcomes of epidemics more precisely and in preparing better disease-prevention protocols.
The simplest model of infection is the SI (susceptible – infected) model. Most diseases, however, do not behave in such a simple manner. Therefore, many modifications to this model have been made, such as the SIR (susceptible – infected – recovered), the SIS (the second S denotes reinfection) and SIRS models. The idea of latency is taken into account in models such as SEIR (where E stands for exposed). The SIR model is also known as the Reed–Frost model. [16]
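As a minimal illustration of the SIR idea, the sketch below integrates the classical fully mixed (non-network) SIR equations with a crude Euler step; the transmission and recovery rates are arbitrary example values, not parameters for any particular disease.

```python
# Fully mixed SIR model, integrated with a simple Euler step.
# beta (transmission rate) and gamma (recovery rate) are example values only.
beta, gamma = 0.3, 0.1
s, i, r = 0.99, 0.01, 0.0   # initial fractions of the population
dt, days = 0.1, 160

for step in range(int(days / dt)):
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    s -= new_infections
    i += new_infections - new_recoveries
    r += new_recoveries

print(f"After {days} days: S={s:.2f}, I={i:.2f}, R={r:.2f}")
```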
To factor these into an outbreak network model, one must consider the degree distribution of vertices in the giant component of the network (outbreaks in small components are isolated and die out quickly, which does not allow them to become epidemics). Theoretically, weighted networks can provide more accurate information on the exposure probability of vertices, but more evidence is needed. Pastor-Satorras et al. pioneered much work in this area, which began with the simplest form (the SI model) applied to networks drawn from the configuration model. [17]
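A rough sketch of extracting the giant component and its degree distribution from a configuration-model graph is shown below; the degree sequence is generated at random purely for illustration and has no empirical basis.

```python
import collections
import random
import networkx as nx

random.seed(1)

# An arbitrary degree sequence for a 1,000-vertex configuration-model graph.
degree_sequence = [random.randint(0, 6) for _ in range(1000)]
if sum(degree_sequence) % 2:            # the configuration model needs an even sum
    degree_sequence[0] += 1

G = nx.configuration_model(degree_sequence, seed=1)
G = nx.Graph(G)                          # collapse parallel edges
G.remove_edges_from(list(nx.selfloop_edges(G)))

# The giant component is the largest connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()
print("Giant component size:", giant.number_of_nodes())

# Degree distribution within the giant component.
counts = collections.Counter(dict(giant.degree()).values())
for degree in sorted(counts):
    print(f"degree {degree}: {counts[degree]} vertices")
```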
The biology of how an infection causes disease in an individual is complicated and is another type of disease pattern that specialists are interested in; this process, known as pathogenesis, involves the immunology of the host and the virulence factors of the pathogen.
In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices which are connected by edges. A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study in discrete mathematics.
In graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one path, or equivalently a connected acyclic undirected graph. A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees.
In mathematics, particularly graph theory, and computer science, a directed acyclic graph (DAG) is a directed graph with no directed cycles. That is, it consists of vertices and edges, with each edge directed from one vertex to another, such that following those directions will never form a closed loop. A directed graph is a DAG if and only if it can be topologically ordered, by arranging the vertices as a linear ordering that is consistent with all edge directions. DAGs have numerous scientific and computational applications, ranging from biology to information science to computation (scheduling).
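A short sketch of testing acyclicity and producing a topological order with networkx follows; the task names are invented for the example.

```python
import networkx as nx

# A hypothetical scheduling DAG: each edge means "must finish before".
dag = nx.DiGraph([
    ("collect data", "clean data"), ("clean data", "analyse"),
    ("collect data", "analyse"), ("analyse", "report"),
])

print("Is a DAG:", nx.is_directed_acyclic_graph(dag))
print("One valid topological order:", list(nx.topological_sort(dag)))
```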
In mathematics, a hypergraph is a generalization of a graph in which an edge can join any number of vertices. In contrast, in an ordinary graph, an edge connects exactly two vertices.
In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph.
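For a small labelled graph, the adjacency matrix can be produced directly; the sketch below uses networkx with arbitrary vertex labels.

```python
import networkx as nx

G = nx.Graph([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])
order = ["A", "B", "C", "D"]

# Entry (i, j) is 1 if the i-th and j-th vertices are adjacent, else 0.
A = nx.to_numpy_array(G, nodelist=order, dtype=int)
print(order)
print(A)
```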
This is a glossary of graph theory. Graph theory is the study of graphs, systems of nodes or vertices connected in pairs by lines or edges.
In discrete mathematics, and more specifically in graph theory, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related". The objects correspond to mathematical abstractions called vertices and each of the related pairs of vertices is called an edge. Typically, a graph is depicted in diagrammatic form as a set of dots or circles for the vertices, joined by lines or curves for the edges. Graphs are one of the objects of study in discrete mathematics.
In mathematics, random graph is the general term to refer to probability distributions over graphs. Random graphs may be described simply by a probability distribution, or by a random process which generates them. The theory of random graphs lies at the intersection between graph theory and probability theory. From a mathematical perspective, random graphs are used to answer questions about the properties of typical graphs. Its practical applications are found in all areas in which complex networks need to be modeled – many random graph models are thus known, mirroring the diverse types of complex networks encountered in different areas. In a mathematical context, random graph refers almost exclusively to the Erdős–Rényi random graph model. In other contexts, any graph model may be referred to as a random graph.
In the mathematical theory of directed graphs, a graph is said to be strongly connected if every vertex is reachable from every other vertex. The strongly connected components of an arbitrary directed graph form a partition into subgraphs that are themselves strongly connected. It is possible to test the strong connectivity of a graph, or to find its strongly connected components, in linear time (that is, Θ(V + E)).
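A minimal check on an invented directed graph, using the linear-time component routines in networkx:

```python
import networkx as nx

D = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)])

print("Strongly connected:", nx.is_strongly_connected(D))
# Tarjan/Kosaraju-style algorithms find the components in linear time.
for component in nx.strongly_connected_components(D):
    print(sorted(component))
```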
In graph theory and network analysis, indicators of centrality assign numbers or rankings to nodes within a graph corresponding to their network position. Applications include identifying the most influential person(s) in a social network, key infrastructure nodes in the Internet or urban networks, super-spreaders of disease, and brain networks. Centrality concepts were first developed in social network analysis, and many of the terms used to measure centrality reflect their sociological origin.
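The sketch below compares three common centrality indicators on a small example graph (the Krackhardt kite, chosen arbitrarily); which measure is appropriate depends on the question being asked of the network.

```python
import networkx as nx

# A small example graph often used to contrast centrality measures.
G = nx.krackhardt_kite_graph()

degree      = nx.degree_centrality(G)
closeness   = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

for node in G.nodes:
    print(f"node {node}: degree={degree[node]:.2f}  "
          f"closeness={closeness[node]:.2f}  betweenness={betweenness[node]:.2f}")
```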
In the context of network theory, a complex network is a graph (network) with non-trivial topological features—features that do not occur in simple networks such as lattices or random graphs but often occur in networks representing real systems. The study of complex networks is a young and active area of scientific research inspired largely by empirical findings of real-world networks such as computer networks, biological networks, technological networks, brain networks, climate networks and social networks.
In the mathematical discipline of graph theory, the dual graph of a plane graph G is a graph that has a vertex for each face of G. The dual graph has an edge for each pair of faces in G that are separated from each other by an edge, and a self-loop when the same face appears on both sides of an edge. Thus, each edge e of G has a corresponding dual edge, whose endpoints are the dual vertices corresponding to the faces on either side of e. The definition of the dual depends on the choice of embedding of the graph G, so it is a property of plane graphs rather than planar graphs. For planar graphs generally, there may be multiple dual graphs, depending on the choice of planar embedding of the graph.
In the mathematical field of graph theory, a transitive reduction of a directed graph D is another directed graph with the same vertices and as few edges as possible, such that for all pairs of vertices v, w a (directed) path from v to w in D exists if and only if such a path exists in the reduction. Transitive reductions were introduced by Aho, Garey & Ullman (1972), who provided tight bounds on the computational complexity of constructing them.
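networkx exposes this operation for directed acyclic graphs; a tiny example with invented edges:

```python
import networkx as nx

# The edge (a, c) is redundant: a path a -> b -> c already exists.
D = nx.DiGraph([("a", "b"), ("b", "c"), ("a", "c")])

reduced = nx.transitive_reduction(D)   # defined for DAGs only
print("Original edges:", list(D.edges))
print("Reduced edges: ", list(reduced.edges))
```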
In graph theory, an edge contraction is an operation that removes an edge from a graph while simultaneously merging the two vertices that it previously joined. Edge contraction is a fundamental operation in the theory of graph minors. Vertex identification is a less restrictive form of this operation.
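A small sketch of contracting a single edge with networkx, using an arbitrary 5-cycle as the example graph:

```python
import networkx as nx

# Contract the edge (1, 2): vertices 1 and 2 merge into a single vertex.
G = nx.cycle_graph(5)                         # a 5-cycle: 0-1-2-3-4-0
H = nx.contracted_edge(G, (1, 2), self_loops=False)

print("Before:", sorted(G.edges))
print("After: ", sorted(H.edges))             # the result is a 4-cycle
```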
In the mathematical field of graph theory, the Erdős–Rényi model refers to one of two closely related models for generating random graphs or the evolution of a random network. These models are named after Hungarian mathematicians Paul Erdős and Alfréd Rényi, who introduced one of the models in 1959. Edgar Gilbert introduced the other model contemporaneously with and independently of Erdős and Rényi. In the model of Erdős and Rényi, all graphs on a fixed vertex set with a fixed number of edges are equally likely. In the model introduced by Gilbert, also called the Erdős–Rényi–Gilbert model, each edge has a fixed probability of being present or absent, independently of the other edges. These models can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs.
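Both variants are available in networkx; the parameters below are arbitrary example values.

```python
import networkx as nx

n = 100
# Gilbert's G(n, p): each of the C(n, 2) possible edges appears with probability p.
G_np = nx.gnp_random_graph(n, p=0.05, seed=7)
# Erdős-Rényi's G(n, m): a graph chosen uniformly among all graphs with exactly m edges.
G_nm = nx.gnm_random_graph(n, m=250, seed=7)

print("G(n, p) edges:", G_np.number_of_edges())   # close to p * C(n, 2) = 247.5
print("G(n, m) edges:", G_nm.number_of_edges())   # exactly 250
```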
Network science is an academic field which studies complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks, considering distinct elements or actors represented by nodes and the connections between the elements or actors as links. The field draws on theories and methods including graph theory from mathematics, statistical mechanics from physics, data mining and information visualization from computer science, inferential modeling from statistics, and social structure from sociology. The United States National Research Council defines network science as "the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena."
In mathematics, and more specifically in graph theory, a directed graph is a graph that is made up of a set of vertices connected by directed edges, often called arcs.
In graph theory, a k-degenerate graph is an undirected graph in which every subgraph has a vertex of degree at most k: that is, some vertex in the subgraph touches k or fewer of the subgraph's edges. The degeneracy of a graph is the smallest value of k for which it is k-degenerate. The degeneracy of a graph is a measure of how sparse it is, and is within a constant factor of other sparsity measures such as the arboricity of a graph.
In graph theory, betweenness centrality is a measure of centrality in a graph based on shortest paths. For every pair of vertices in a connected graph, there exists at least one shortest path between the vertices such that either the number of edges that the path passes through or the sum of the weights of the edges is minimized. The betweenness centrality for each vertex is the number of these shortest paths that pass through the vertex.
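The sketch below computes (normalized) betweenness centrality for an invented "barbell" graph, where the bridging vertices lie on every shortest path between the two dense ends and therefore score highest.

```python
import networkx as nx

# A barbell graph: two 5-vertex cliques joined by a 2-vertex path.
G = nx.barbell_graph(5, 2)

scores = nx.betweenness_centrality(G, normalized=True)
top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]
for node, score in top:
    print(f"vertex {node}: betweenness {score:.2f}")
```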
In graph theory, an orientation of an undirected graph is an assignment of a direction to each edge, turning the initial graph into a directed graph.