The friendship paradox is the phenomenon, first observed by the sociologist Scott L. Feld in 1991, that on average an individual's friends have more friends than that individual does. [1] It can be explained as a form of sampling bias in which people with more friends are more likely to be in one's own friend group; in other words, one is less likely to be friends with someone who has very few friends. Despite this, most people believe that they have more friends than their friends have. [2] [3] [4] [5]
The same observation can be applied more generally to social networks defined by relations other than friendship: for instance, most people's sexual partners have had (on average) a greater number of sexual partners than they have. [6] [7]
The friendship paradox is an example of how network structure can significantly distort an individual's local observations. [8] [9]
In spite of its apparently paradoxical nature, the phenomenon is real, and can be explained as a consequence of the general mathematical properties of social networks. The mathematics behind this are directly related to the arithmetic-geometric mean inequality and the Cauchy–Schwarz inequality. [10]
Formally, Feld assumes that a social network is represented by an undirected graph G = (V, E), where the set V of vertices corresponds to the people in the social network, and the set E of edges corresponds to the friendship relation between pairs of people. That is, he assumes that friendship is a symmetric relation: if x is a friend of y, then y is a friend of x. The friendship between x and y is therefore modeled by the edge {x, y}, and the number of friends an individual has corresponds to a vertex's degree. The average number of friends of a person in the social network is therefore given by the average of the degrees of the vertices in the graph. That is, if vertex v has d(v) edges touching it (representing a person who has d(v) friends), then the average number μ of friends of a random person in the graph is

$$\mu = \frac{\sum_{v \in V} d(v)}{|V|} = \frac{2|E|}{|V|}.$$
The average number of friends that a typical friend has can be modeled by choosing a random person (who has at least one friend), and then calculating how many friends their friends have on average. This amounts to choosing, uniformly at random, an edge of the graph (representing a pair of friends) and an endpoint of that edge (one of the friends), and again calculating the degree of the selected endpoint. The probability that a given vertex $v$ is chosen is

$$\frac{d(v)}{|E|} \cdot \frac{1}{2}.$$
The first factor corresponds to how likely it is that the chosen edge contains the vertex, which increases when the vertex has more friends. The halving factor simply comes from the fact that each edge has two vertices. So the expected value of the number of friends of a (randomly chosen) friend is

$$\sum_{v \in V} \frac{d(v)}{2|E|} \, d(v) = \frac{\sum_{v \in V} d(v)^2}{2|E|}.$$
We know from the definition of variance that

$$\sigma^2 = \frac{\sum_{v \in V} d(v)^2}{|V|} - \mu^2,$$

where $\sigma^2$ is the variance of the degrees in the graph. This allows us to compute the desired expected value, using $2|E| = \mu|V|$, as

$$\frac{\sum_{v \in V} d(v)^2}{2|E|} = \frac{(\mu^2 + \sigma^2)|V|}{\mu|V|} = \mu + \frac{\sigma^2}{\mu}.$$

For a graph that has vertices of varying degrees (as is typical for social networks), $\sigma^2$ is strictly positive, which implies that the average degree of a friend is strictly greater than the average degree of a random node.
Another way of understanding how the $d(v)^2$ term arises is as follows. For each friendship (u, v), node u mentions that v is a friend who has d(v) friends. There are d(v) friends of v who each make such a mention of v, so d(v) is counted d(v) times; hence the square of d(v). We add this over all friendships in the network, from both the u's and the v's perspectives, which gives the numerator. The denominator is the total number of such mentions, which is twice the number of edges in the network (one from the u's perspective and one from the v's).
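This computation can be checked numerically. The following Python sketch is a minimal illustration on a hypothetical toy graph (the edge list is an assumption for demonstration, not data from the sources): it computes the average degree μ, the degree variance σ², and the mean degree over all friend-ends, confirming that the latter equals μ + σ²/μ.

    from collections import defaultdict

    edges = [(1, 2), (1, 3), (1, 4), (2, 3), (4, 5)]  # hypothetical toy graph

    # Tally each vertex's degree d(v).
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    n = len(degree)
    mu = sum(degree.values()) / n                            # average degree, 2|E|/|V|
    var = sum(d * d for d in degree.values()) / n - mu ** 2  # degree variance

    # Degree of a uniformly random endpoint of a uniformly random edge:
    friend_ends = [degree[w] for e in edges for w in e]
    mean_friend_degree = sum(friend_ends) / len(friend_ends)

    print(mu, var, mean_friend_degree, mu + var / mu)
    # 2.0 0.4 2.2 2.2 -- the average friend has more friends than the
    # average person whenever the degree variance is positive.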
After this analysis, Feld goes on to make some more qualitative assumptions about the statistical correlation between the number of friends that two friends have, based on theories of social networks such as assortative mixing, and he analyzes what these assumptions imply about the number of people whose friends have more friends than they do. Based on this analysis, he concludes that in real social networks, most people are likely to have fewer friends than the average of their friends' numbers of friends. However, this conclusion is not a mathematical certainty; there exist undirected graphs (such as the graph formed by removing a single edge from a large complete graph) that are unlikely to arise as social networks but in which most vertices have higher degree than the average of their neighbors' degrees.
The Friendship Paradox may be restated in graph theory terms as "the average degree of a randomly selected node in a network is less than the average degree of neighbors of a randomly selected node", but this leaves unspecified the exact mechanism of averaging (i.e., macro vs micro averaging). Let $G = (V, E)$ be an undirected graph with $|V| = N$ and $|E| = M$, having no isolated nodes. Let the set of neighbors of node $u$ be denoted $\mathrm{nbr}(u)$. The average degree is then

$$\mu = \frac{1}{N} \sum_{u \in V} |\mathrm{nbr}(u)| = \frac{2M}{N}.$$

Let the number of "friends of friends" of node $u$ be denoted $\mathrm{FF}(u)$. Note that this can count 2-hop neighbors multiple times, but so does Feld's analysis. We have

$$\mathrm{FF}(u) = \sum_{v \in \mathrm{nbr}(u)} |\mathrm{nbr}(v)|.$$

Feld considered the following "micro average" quantity:

$$\mathrm{MicroAvg} = \frac{\sum_{u \in V} \mathrm{FF}(u)}{\sum_{u \in V} |\mathrm{nbr}(u)|}.$$
However, there is also the (equally legitimate) "macro average" quantity, given by

$$\mathrm{MacroAvg} = \frac{1}{N} \sum_{u \in V} \frac{\mathrm{FF}(u)}{|\mathrm{nbr}(u)|}.$$
The computation of MacroAvg can be expressed as the following pseudocode.
Algorithm MacroAvg
    for each node $u \in V$:
        initialize $Q(u) \leftarrow 0$
    for each edge $\{u, v\} \in E$:
        $Q(u) \leftarrow Q(u) + |\mathrm{nbr}(v)| / |\mathrm{nbr}(u)|$
        $Q(v) \leftarrow Q(v) + |\mathrm{nbr}(u)| / |\mathrm{nbr}(v)|$
    return $\mathrm{MacroAvg} \leftarrow \frac{1}{N} \sum_{u \in V} Q(u)$
Each edge $\{u, v\}$ contributes to MacroAvg the quantity

$$\frac{1}{N} \left( \frac{|\mathrm{nbr}(u)|}{|\mathrm{nbr}(v)|} + \frac{|\mathrm{nbr}(v)|}{|\mathrm{nbr}(u)|} \right) \ge \frac{2}{N},$$

because $\frac{x}{y} + \frac{y}{x} \ge 2$ for positive $x, y$. We thus get

$$\mathrm{MacroAvg} \ge M \cdot \frac{2}{N} = \frac{2M}{N} = \mu.$$
Thus, we have both $\mathrm{MicroAvg} \ge \mu$ and $\mathrm{MacroAvg} \ge \mu$, but no inequality holds between them. [11]
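For concreteness, both averages can be computed directly from their definitions. The following Python sketch is an illustration on a hypothetical toy graph (an assumption for demonstration); it evaluates MicroAvg and MacroAvg and compares them with μ.

    from collections import defaultdict

    edges = [(1, 2), (2, 3), (3, 4), (4, 2)]  # hypothetical toy graph

    # Build the neighbor sets nbr(u).
    nbr = defaultdict(set)
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)

    N = len(nbr)
    mu = sum(len(nbr[u]) for u in nbr) / N  # average degree, 2M/N

    # FF(u): total number of friends of u's friends (with multiplicity).
    ff = {u: sum(len(nbr[v]) for v in nbr[u]) for u in nbr}

    micro_avg = sum(ff.values()) / sum(len(nbr[u]) for u in nbr)
    macro_avg = sum(ff[u] / len(nbr[u]) for u in nbr) / N

    print(mu, micro_avg, macro_avg)
    # 2.0 2.25 2.4166... -- both averages exceed mu here, but in general
    # neither MicroAvg nor MacroAvg dominates the other.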
In a 2023 paper, Ghasemian and Christakis defined and demonstrated a parallel paradox for negative, antagonistic, or animosity ties, termed the "enmity paradox". [12] In brief, one's enemies have more enemies than one does, too. The paper also documented diverse related phenomena in "mixed worlds" of both hostile and friendly ties.
The analysis of the friendship paradox implies that the friends of randomly selected individuals are likely to have higher than average centrality. This observation has been used as a way to forecast and slow the course of epidemics, by using this random selection process to choose individuals to immunize or monitor for infection while avoiding the need for a complex computation of the centrality of all nodes in the network. [13] [14] [15] In a similar manner, in polling and election forecasting, the friendship paradox has been exploited in order to reach and query well-connected individuals who may have knowledge about how numerous other individuals are going to vote. [16] However, when utilized in such contexts, the friendship paradox inevitably introduces bias by over-representing individuals with many friends, potentially skewing resulting estimates. [17] [18]
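A minimal sketch of this selection strategy (a hypothetical illustration, not the exact procedure of the cited studies): sample individuals uniformly at random, then nominate one random friend of each. By the friendship paradox, the nominated friends are biased toward high degree, and hence toward higher centrality, making them natural targets for immunization or sentinel surveillance.

    import random
    from collections import defaultdict

    def random_friend_sample(edges, k, rng=random):
        """Pick k uniform random nodes and return one random friend of each.

        The returned friends are degree-biased (high-degree nodes are
        over-represented), which is the property exploited for targeted
        immunization or early outbreak detection.
        """
        nbr = defaultdict(list)
        for u, v in edges:
            nbr[u].append(v)
            nbr[v].append(u)
        nodes = list(nbr)
        seeds = [rng.choice(nodes) for _ in range(k)]
        return [rng.choice(nbr[s]) for s in seeds]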
A study in 2010 by Christakis and Fowler showed that flu outbreaks can be detected almost two weeks before traditional surveillance measures would do so by using the friendship paradox in monitoring the infection in a social network. [19] They found that using the friendship paradox to analyze the health of central friends is "an ideal way to predict outbreaks, but detailed information doesn't exist for most groups, and to produce it would be time-consuming and costly." [20] This extends to the spread of ideas as well, with evidence that the friendship paradox can be used to track and predict the spread of ideas and misinformation through networks. [21] [13] [22] This observation has been explained with the argument that individuals with more social connections may be the driving forces behind the spread of these ideas and beliefs, and as such can be used as early-warning signals. [18]
Friendship paradox based sampling (i.e., sampling random friends) has been theoretically and empirically shown to outperform classical uniform sampling for the purpose of estimating the power-law degree distributions of scale-free networks. [23] [24] The reason is that sampling the network uniformly will not collect enough samples from the characteristic heavy tail part of the power-law degree distribution to properly estimate it. However, sampling random friends incorporates more nodes from the tail of the degree distribution (i.e., more high degree nodes) into the sample. Hence, friendship paradox based sampling captures the characteristic heavy tail of a power-law degree distribution more accurately and reduces the bias and variance of the estimation. [24]
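This contrast can be sketched in Python (a hedged illustration; the helper function and its parameters are assumptions for demonstration): uniform sampling draws each node with equal probability, while friend sampling draws a node with probability proportional to its degree, so high-degree tail nodes appear in the sample far more often.

    import random
    from collections import defaultdict

    def sample_degrees(edges, k, mode="uniform", rng=random):
        """Return k sampled degrees: of uniform random nodes, or of
        random friends (uniform random endpoints of random edges)."""
        deg = defaultdict(int)
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        nodes = list(deg)
        if mode == "uniform":
            return [deg[rng.choice(nodes)] for _ in range(k)]
        # Friend sampling: a random edge endpoint is a node chosen with
        # probability proportional to its degree, reaching the heavy tail.
        return [deg[rng.choice(rng.choice(edges))] for _ in range(k)]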
The "generalized friendship paradox" states that the friendship paradox applies to other characteristics as well. For example, one's co-authors are on average likely to be more prominent, with more publications, more citations and more collaborators, [25] [26] [27] or one's followers on Twitter have more followers. [28] The same effect has also been demonstrated for Subjective Well-Being by Bollen et al. (2017), [29] who used a large-scale Twitter network and longitudinal data on subjective well-being for each individual in the network to demonstrate that both a Friendship and a "happiness" paradox can occur in online social networks.
The friendship paradox has also been used as a means to identify structurally influential nodes within social networks, so as to magnify social contagion of diverse practices relevant to human welfare and public health. This has been shown to be possible in several large-scale randomized controlled field trials conducted by Christakis et al., with respect to the adoption of multivitamins [30] or maternal and child health practices [31] [32] in Honduras, or of iron-fortified salt in India. [33] This technique is valuable because, by exploiting the friendship paradox, one can identify such influential nodes without the expense and delay of actually mapping the whole network.