In mathematics and telecommunications, stochastic geometry models of wireless networks refer to mathematical models based on stochastic geometry that are designed to represent aspects of wireless networks. The related research consists of analyzing these models with the aim of better understanding wireless communication networks in order to predict and control various network performance metrics. The models require using techniques from stochastic geometry and related fields including point processes, spatial statistics, geometric probability, percolation theory, as well as methods from more general mathematical disciplines such as geometry, probability theory, stochastic processes, queueing theory, information theory, and Fourier analysis. [1] [2] [3] [4]
In the early 1960s a stochastic geometry model [5] was developed to study wireless networks. This model is considered to be pioneering and the origin of continuum percolation. [6] Network models based on geometric probability were later proposed and used in the late 1970s [7] and continued throughout the 1980s [8] [9] for examining packet radio networks. Later their use increased significantly for studying a number of wireless network technologies including mobile ad hoc networks, sensor networks, vehicular ad hoc networks, cognitive radio networks and several types of cellular networks, such as heterogeneous cellular networks. [10] [11] [12] Key performance and quality of service quantities are often based on concepts from information theory such as the signal-to-interference-plus-noise ratio, which forms the mathematical basis for defining network connectivity and coverage. [4] [11]
The principal idea underlying the research of these stochastic geometry models, also known as random spatial models, [10] is that it is best to assume that the locations of nodes or the network structure and the aforementioned quantities are random in nature due to the size and unpredictability of users in wireless networks. The use of stochastic geometry can then allow for the derivation of closed-form or semi-closed-form expressions for these quantities without resorting to simulation methods or (possibly intractable or inaccurate) deterministic models. [10]
The discipline of stochastic geometry entails the mathematical study of random objects defined on some (often Euclidean) space. In the context of wireless networks, the random objects are usually simple points (which may represent the locations of network nodes such as receivers and transmitters) or shapes (for example, the coverage area of a transmitter) and the Euclidean space is either 3-dimensional, or more often, the (2-dimensional) plane, which represents a geographical region. In wireless networks (for example, cellular networks) the underlying geometry (the relative locations of nodes) plays a fundamental role due to the interference of other transmitters, whereas in wired networks (for example, the Internet) the underlying geometry is less important.
A wireless network can be seen as a collection of (information theoretic) channels sharing space and some common frequency band. Each channel consists of a set of transmitters trying to send data to a set of receivers. The simplest channel is the point-to-point channel which involves a single transmitter aiming at sending data to a single receiver. The broadcast channel, in information theory terminology, [13] is the one-to-many situation with a single transmitter aiming at sending different data to different receivers and it arises in, for example, the downlink of a cellular network. [14] The multiple access channel is the converse, with several transmitters aiming at sending different data to a single receiver. [13] This many-to-one situation arises in, for example, the uplink of cellular networks. [14] Other channel types exist such as the many-to-many situation. These (information theoretic) channels are also referred to as network links, many of which will be simultaneously active at any given time.
There are a number of examples of geometric objects that can be of interest in wireless networks. For example, consider a collection of points in the Euclidean plane. For each point, place in the plane a disk with its center located at the point. The disks are allowed to overlap with each other and the radius of each disk is random and (stochastically) independent of all the other radii. The mathematical object consisting of the union of all these disks is known as a Boolean (random disk) model [4] [15] [16] and may represent, for example, the sensing region of a sensor network. If the radii are not random but a common positive constant, then the resulting model is known as the Gilbert disk (Boolean) model. [17]
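As an illustration, the following minimal Python sketch (assuming only numpy; all parameter values are illustrative) simulates a Boolean model of random disks on Poisson-distributed centers and compares the empirically covered fraction of a window with the known vacancy formula, under which a location is uncovered with probability exp(−λπE[R²]):

```python
import numpy as np

rng = np.random.default_rng(0)

lam = 0.3        # density of the Poisson process of disk centres (per unit area)
window = 10.0    # side of the square observation window
r_max = 1.6      # radii drawn uniformly on [0, r_max], so E[R^2] = r_max**2 / 3

# Simulate the centres on a padded window so disks reaching into the
# observation window from outside are not missed (edge correction).
pad = r_max
n = rng.poisson(lam * (window + 2 * pad) ** 2)
centers = rng.uniform(-pad, window + pad, size=(n, 2))
radii = rng.uniform(0.0, r_max, size=n)

# Probe the window on a grid: a location is covered if it lies in some disk.
xs = np.linspace(0.0, window, 200)
X, Y = np.meshgrid(xs, xs)
probes = np.column_stack([X.ravel(), Y.ravel()])
d2 = ((probes[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
covered = (d2 <= radii[None, :] ** 2).any(axis=1)

print("empirical covered fraction    :", covered.mean())
print("theory 1 - exp(-lam*pi*E[R^2]):", 1 - np.exp(-lam * np.pi * r_max**2 / 3))
```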
Instead of placing disks on the plane, one may assign a disjoint (or non-overlapping) subregion to each node. Then the plane is partitioned into a collection of disjoint subregions. For example, each subregion may consist of all the locations of the plane that are closer to some point of the underlying point pattern than to any other point of the pattern. This mathematical structure is known as a Voronoi tessellation and may represent, for example, the association cells in a cellular network where users associate with the closest base station.
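The nearest-station association can be sketched directly; the partition of the users induced by this rule is exactly the one given by the Voronoi tessellation of the base stations. The densities and window size below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Base stations and users as two independent Poisson processes on a square.
side, lam_bs, lam_user = 10.0, 0.1, 1.0
bs = rng.uniform(0, side, size=(rng.poisson(lam_bs * side**2), 2))
users = rng.uniform(0, side, size=(rng.poisson(lam_user * side**2), 2))

# Each user associates with the closest base station; this grouping of the
# users is the partition induced by the Voronoi tessellation of bs.
d = np.linalg.norm(users[:, None, :] - bs[None, :, :], axis=2)
cell = d.argmin(axis=1)            # index of the serving base station
print("users per Voronoi cell:", np.bincount(cell, minlength=len(bs)))
```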
Instead of placing a disk or a Voronoi cell on a point, one could place a cell defined from the information theoretic channels described above. For instance, the point-to-point channel cell of a point was defined [18] as the collection of all the locations of the plane where a receiver could sustain a point-to-point channel with a certain quality from a transmitter located at this point. Each such channel, given that the other points are also active transmitters, is a point-to-point channel in its own right.
In each case, the fact that the underlying point pattern is random (for example, a point process) or deterministic (for example, a lattice of points) or some combination of both, will influence the nature of the Boolean model, the Voronoi tessellation, and other geometrical structures such as the point-to-point channel cells constructed from it.
In wired communication, the field of information theory (in particular, the Shannon–Hartley theorem) motivates the need for studying the signal-to-noise ratio (SNR). In wireless communication, when a collection of channels is active at the same time, the interference from the other channels is considered as noise, which motivates the need for the quantity known as the signal-to-interference-plus-noise ratio (SINR). For example, if we have a collection of point-to-point channels, the SINR of the channel of a particular transmitter–receiver pair is defined as:

SINR = S / (I + N),
where S is the power, at the receiver, of the incoming signal from said transmitter, I is the combined power of all the other (interfering) transmitters in the network, and N is the power of some thermal noise term. The SINR reduces to the SNR when there is no interference (i.e. I = 0). In networks where the noise is negligible, also known as "interference limited" networks, we have N = 0, which gives the signal-to-interference ratio (SIR).
A common goal of stochastic geometry wireless network models is to derive expressions for the SINR or for the functions of the SINR which determine coverage (or outage) and connectivity. For example, the concept of the outage probability p_out, which is informally the probability of not being able to successfully send a signal on a channel, is made more precise in the point-to-point case by defining it as the probability that the SINR of a channel is less than or equal to some network-dependent threshold. [19] The coverage probability p_c is then the probability that the SINR is larger than the SINR threshold. In short, given a SINR threshold t, the outage and coverage probabilities are given by

p_out = P(SINR ≤ t) and p_c = P(SINR > t) = 1 − p_out.
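A hedged Monte Carlo sketch (illustrative parameters; Rayleigh fading, a power-law path loss of the kind introduced later in this article, and Poisson interferers in a finite disk standing in for the whole plane) estimates these two probabilities for a point-to-point link of length r0:

```python
import numpy as np

rng = np.random.default_rng(2)

lam, alpha, r0 = 0.05, 4.0, 1.0   # interferer density, path-loss exponent, link distance
N0, t, R = 1e-3, 1.0, 50.0        # noise power, SINR threshold, radius of simulated region
trials = 20000

sinr = np.empty(trials)
for k in range(trials):
    # Poisson number of interferers, placed uniformly in a disk of radius R
    n = rng.poisson(lam * np.pi * R**2)
    r = R * np.sqrt(rng.uniform(size=n))     # radial distances, uniform in area
    h = rng.exponential(size=n)              # Rayleigh fading -> exponential powers
    I = np.sum(h * r**(-alpha))
    S = rng.exponential() * r0**(-alpha)     # the useful signal, also Rayleigh faded
    sinr[k] = S / (I + N0)

print("outage   p_out =", np.mean(sinr <= t))
print("coverage p_c   =", np.mean(sinr > t))
```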
One aim of the stochastic geometry models is to derive the probability laws of the Shannon channel capacity or rate of a typical channel when taking into account the interference created by all other channels.
In the point-to-point channel case, the interference created by other transmitters is considered as noise, and when this noise is Gaussian, the law of the typical Shannon channel capacity is then determined by that of the SINR through Shannon's formula (in bits per second):

C = B log2(1 + SINR),
where B is the bandwidth of the channel in hertz. In other words, there is a direct relationship between the coverage or outage probability and the Shannon channel capacity. The problem of determining the probability distribution of C under such a random setting has been studied in several types of wireless network architecture.
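A small helper makes this mapping concrete; the bandwidth value is an arbitrary assumption, and the function can be applied directly to the SINR samples produced by the sketch above:

```python
import numpy as np

B = 1e6   # assumed channel bandwidth in hertz (illustrative)

def shannon_rate(sinr):
    """Shannon rate, in bits per second, of a channel with the given SINR."""
    return B * np.log2(1.0 + sinr)

# e.g. SINR values of -3 dB, 0 dB and 10 dB:
print(shannon_rate(np.array([0.5, 1.0, 10.0])))
```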
In general, the use of methods from the theories of probability and stochastic processes in communication systems has a long and interwoven history stretching back over a century to the pioneering teletraffic work of Agner Erlang. [20] In the setting of stochastic geometry models, Edgar Gilbert [5] in the 1960s proposed a mathematical model for wireless networks, now known as a Gilbert disk model, [17] that gave rise to the field of continuum percolation theory, which in turn is a generalization of discrete percolation. [6] Starting in the late 1970s, Leonard Kleinrock and others used wireless models based on Poisson processes to study packet radio networks. [7] [8] [9] This work continued into the 1990s, when it crossed paths with the work on shot noise.
The general theory and techniques of stochastic geometry and, in particular, point processes have often been motivated by the understanding of a type of noise that arises in electronic systems known as shot noise. For certain mathematical functions of a point process, a standard method for finding the average (or expectation) of the sum of these functions is Campbell's formula [4] [21] or theorem, [22] which has its origins in the pioneering work by Norman R. Campbell on shot noise over a century ago. [23] [24] Much later in the 1960s, Gilbert alongside Henry Pollak studied the shot noise process [25] formed from a sum of response functions of a Poisson process and identically distributed random variables. The shot noise process inspired more formal mathematical work in the field of point processes, [26] [27] often involving the use of characteristic functions, and would later be used for models of signal interference from other nodes in the network.
Around the early 1990s, shot noise based on a Poisson process and a power-law response function was studied and observed to have a stable distribution. [28] Independently, researchers [19] [29] successfully developed Fourier and Laplace transform techniques for the interference experienced by a user in a wireless network in which the locations of the (interfering) nodes or transmitters are positioned according to a Poisson process. It was independently shown again that Poisson shot noise, now as a model for interference, has a stable distribution [29] by use of characteristic functions or, equivalently, Laplace transforms, which are often easier to work with than the corresponding probability distributions. [1] [2] [30]
Moreover, the assumption of the received (i.e. useful) signal power being exponentially distributed (for example, due to Rayleigh fading) and of Poisson shot noise (whose Laplace transform is known) allows for an explicit closed-form expression for the coverage probability based on the SINR. [19] [31] This observation helps to explain why the Rayleigh fading assumption is frequently made when constructing stochastic geometry models. [1] [2] [4]
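This closed form can be checked numerically. For the interference-limited case with Rayleigh fading, path loss r^−α and a fixed link distance r0, the standard Laplace-transform result gives p_c = exp(−λπ r0² t^δ Γ(1+δ)Γ(1−δ)) with δ = 2/α; the sketch below (illustrative parameters, finite simulation window in place of the infinite plane) compares it with simulation:

```python
import numpy as np
from math import exp, gamma, pi

rng = np.random.default_rng(3)

lam, alpha, r0, t = 0.05, 4.0, 1.0, 1.0
delta = 2.0 / alpha

# Closed-form coverage probability for Rayleigh fading, power-law path loss
# and no noise (Laplace transform of the Poisson shot-noise interference):
pc_theory = exp(-lam * pi * r0**2 * t**delta * gamma(1 + delta) * gamma(1 - delta))

R, trials = 100.0, 10000   # finite window: far interferers are negligible for alpha = 4
hits = 0
for _ in range(trials):
    n = rng.poisson(lam * pi * R**2)
    r = R * np.sqrt(rng.uniform(size=n))             # interferer distances
    I = np.sum(rng.exponential(size=n) * r**(-alpha))
    hits += rng.exponential() * r0**(-alpha) > t * I   # Rayleigh-faded signal vs SIR test
print("closed form:", pc_theory)
print("simulation :", hits / trials)
```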
Later in the early 2000s researchers started examining the properties of the regions under SINR coverage in the framework of stochastic geometry and, in particular, coverage processes. [18] Connectivity in terms of the SINR was studied using techniques from continuum percolation theory. More specifically, the early results of Gilbert were generalized to the setting of the SINR case. [32] [33]
A wireless network consists of nodes (each of which is a transmitter, receiver or both, depending on the system) that produce, relay or consume data within the network; examples include base stations and users in a cellular phone network, or sensor nodes in a sensor network. Before developing stochastic geometry wireless models, models are required for mathematically representing the signal propagation and the node positioning. The propagation model captures how signals propagate from transmitters to receivers. The node location or positioning model (idealizes and) represents the positions of the nodes as a point process. The choice of these models depends on the nature of the wireless network and its environment. The network type depends on such factors as the specific architecture (for instance cellular) and the channel or medium access control (MAC) protocol, which controls the channels and, hence, the communicating structures of the network. In particular, to prevent the collision of transmissions in the network, the MAC protocol dictates, based on certain rules, when transmitter–receiver pairs can access the network both in time and space, which also affects the active node positioning model.
Suitable and manageable models are needed for the propagation of electromagnetic signals (or waves) through various media, such as air, taking into account multipath propagation (due to reflection, refraction, diffraction and dispersion) caused by signals colliding with obstacles such as buildings. The propagation model is a building block of the stochastic geometry wireless network model. A common approach is to consider propagation models with two separate parts consisting of the random and deterministic (or non-random) components of signal propagation.
The deterministic component is usually represented by some path-loss or attenuation function that uses the distance propagated by the signal (from its source) for modeling the power decay of electromagnetic signals. The distance-dependent path-loss function may be a simple power-law function (for example, the Hata model), a fast-decaying exponential function, some combination of both, or another decreasing function. Owing to its tractability, models have often incorporated the power-law function
ℓ(|x − y|) = |x − y|^−α,

where the path-loss exponent α > 2, and |x − y| denotes the distance between point y and the signal source at point x.
The random component seeks to capture certain types of signal fading associated with absorption and reflections by obstacles. The fading models in use include Rayleigh (implying exponential random variables for the power), log-normal, Rice, and Nakagami distributions.
Both the deterministic and random components of signal propagation are usually considered detrimental to the overall performance of a wireless network.
An important task in stochastic geometry network models is choosing a mathematical model for the location of the network nodes. The standard assumption is that the nodes are represented by (idealized) points in some space (often the Euclidean space R^n, and even more often the plane R^2), which means they form a stochastic or random structure known as a (spatial) point process. [10]
A number of point processes have been suggested to model the positioning of wireless network nodes. Among these, the most frequently used is the Poisson process, which gives a Poisson network model. [10] The Poisson process in general is commonly used as a mathematical model across numerous disciplines due to its highly tractable and well-studied nature. [15] [22] It is often assumed that the Poisson process is homogeneous (implying it is a stationary process) with some constant node density λ. For a Poisson process in the plane, this implies that the probability of having n points or nodes in a bounded region B is given by
P[N(B) = n] = ((λ|B|)^n / n!) e^(−λ|B|),

where |B| is the area of B and n! denotes n factorial. The above equation quickly extends to the R^3 case by replacing the area term with a volume term.
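The following sketch (illustrative density and window) simulates a homogeneous Poisson process by placing a Poisson number of uniform points in a window, and checks the empirical distribution of the count in a sub-region B against the formula above:

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(4)

lam, side, b = 0.5, 20.0, 2.0      # density, window side, side of the sub-square B
mu = lam * b * b                   # lam * |B|

counts = []
for _ in range(10000):
    n = rng.poisson(lam * side**2)                   # total points in the window
    pts = rng.uniform(0, side, size=(n, 2))          # uniform locations given the total
    counts.append(int((pts < b).all(axis=1).sum()))  # how many fall in B = [0, b]^2
counts = np.asarray(counts)

for n in range(5):
    print(n, "empirical:", np.mean(counts == n),
          "  formula:", mu**n / factorial(n) * exp(-mu))
```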
The mathematical tractability or ease of working with Poisson models is mostly because of its 'complete independence' property, which essentially says that disjoint (or non-overlapping) bounded regions contain Poisson numbers of points that are independent of each other. This important property characterizes the Poisson process and is often used as its definition. [22]
The complete independence or 'randomness' [35] property of Poisson processes leads to some useful characteristics and results of point process operations such as the superposition property: the superposition of Poisson processes with densities λ1 to λn is another Poisson process with density

λ = λ1 + λ2 + ⋯ + λn.
Furthermore, randomly thinning a Poisson process (with density λ), where each point is independently removed (or kept) with some probability p (or 1 − p), forms another Poisson process (of the kept points, with density (1 − p)λ), while the removed points also form a Poisson process (with density pλ) that is independent of the Poisson process of kept points. [15] [22]
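Both properties are easy to verify empirically; in the sketch below (arbitrary density, window and thinning probability), a Poisson pattern is independently thinned and then re-superimposed:

```python
import numpy as np

rng = np.random.default_rng(5)

lam, side, p = 1.0, 30.0, 0.3      # density, window side, removal probability

pts = rng.uniform(0, side, size=(rng.poisson(lam * side**2), 2))
removed_mask = rng.uniform(size=len(pts)) < p    # each point removed with probability p
kept, removed = pts[~removed_mask], pts[removed_mask]

# Both thinned patterns are again Poisson, with densities (1 - p)*lam and p*lam:
print("kept density   :", len(kept) / side**2, " expected:", (1 - p) * lam)
print("removed density:", len(removed) / side**2, " expected:", p * lam)

# Superposition: merging independent Poisson patterns gives a Poisson pattern
# whose density is the sum of the densities.
merged = np.vstack([kept, removed])
print("merged density :", len(merged) / side**2, " expected:", lam)
```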
These properties and the definition of the homogeneous Poisson process extend to the case of the inhomogeneous (or non-homogeneous) Poisson process, which is a non-stationary stochastic process with a location-dependent density λ(x), where x is a point (usually in the plane, R^2). For more information, see the articles on the Poisson process.
Despite its simplifying nature, the independence property of the Poisson process has been criticized for not realistically representing the configuration of deployed networks. [34] For example, it does not capture node "repulsion", where two (or more) nodes in a wireless network may not normally be placed (arbitrarily) close to each other (for example, base stations in a cellular network). In addition to this, MAC protocols often induce correlations or non-Poisson configurations into the geometry of the simultaneously active transmitter pattern. Strong correlations also arise in the case of cognitive radio networks where secondary transmitters are only allowed to transmit if they are far from primary receivers. To answer these and other criticisms, a number of point processes have been suggested to represent the positioning of nodes including the binomial process, cluster processes, Matérn hard-core processes, [2] [4] [36] [37] and Strauss and Ginibre processes. [10] [38] [39] For example, Matérn hard-core processes are constructed by dependently thinning a Poisson point process. The dependent thinning is done in such a way that for any point in the resulting hard-core process, there are no other points within a certain set radius of it, thus creating a "hard-core" around each point in the process. [4] [15] On the other hand, soft-core processes have point repulsion that ranges somewhere between the hard-core processes and Poisson processes (which have no repulsion). More specifically, the probability of a point existing near another point in a soft-core point process decreases in some way as it approaches the other point, thus creating a "soft-core" around each point where other points can exist, but are less likely to.
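A Matérn type II hard-core process, one common variant, can be sketched as follows: each Poisson point receives an independent uniform mark and survives only if no point within the hard-core radius carries a smaller mark. The parameters are illustrative, and the comparison uses the known retained density (1 − exp(−λπr²))/(πr²); points near the window edge make the empirical value slightly biased:

```python
import numpy as np

rng = np.random.default_rng(6)

lam, side, r = 1.0, 20.0, 1.0      # parent density, window side, hard-core radius

n = rng.poisson(lam * side**2)
pts = rng.uniform(0, side, size=(n, 2))
marks = rng.uniform(size=n)        # independent uniform marks ("ages")

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
# Matern type II rule: keep a point iff no point within distance r has a smaller mark.
keep = np.array([not np.any((d[i] < r) & (marks < marks[i])) for i in range(n)])

print("parent density   :", n / side**2)
print("hard-core density:", keep.sum() / side**2)
print("theory           :", (1 - np.exp(-lam * np.pi * r**2)) / (np.pi * r**2))
```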
Although models based on these and other point processes come closer to resembling reality in some situations, for example in the configuration of cellular base stations, [34] [40] they often suffer from a loss of tractability while the Poisson process greatly simplifies the mathematics and techniques, explaining its continued use for developing stochastic geometry models of wireless networks. [10] Also, it has been shown that the SIR distribution of non-Poisson cellular networks can be closely approximated by applying a horizontal shift to the SIR distribution of a Poisson network. [41]
The type of network model is a combination of factors such as the network architectural organization (cellular, ad hoc, cognitive radio), the medium access control (MAC) protocol being used, the application running on it, and whether the network is mobile or static.
Around the beginning of the 21st century a number of new network technologies arose, including mobile ad hoc networks and sensor networks. Stochastic geometry and percolation techniques have been used to develop models for these networks. [2] [42] Increases in user traffic have resulted in stochastic geometry being applied to cellular networks. [43]
The Poisson bipolar network model is a type of stochastic geometry model based on the Poisson process and is an early example of a model for mobile ad hoc networks (MANETs), [2] [31] [44] which are self-organizing wireless communication networks in which mobile devices rely on no infrastructure (base stations or access points). In MANET models, the transmitters form a random point process and each transmitter has its receiver located at some random distance and orientation. The channels form a collection of transmitter–receiver pairs or "bipoles"; the signal of a channel is that transmitted over the associated bipole, whereas the interference is that created by all the transmitters other than that of the bipole. The approach of considering transmitter–receiver bipoles led to the development and analysis of the Poisson bipolar network model. In particular, the choice of the medium access probability that maximizes the mean number of successful transmissions per unit space was derived in [31].
A wireless sensor network consists of a spatially distributed collection of autonomous sensor nodes. Each node is designed to monitor physical or environmental conditions, such as temperature, sound, pressure, etc., and to cooperatively relay the collected data through the network to a main location. In unstructured sensor networks, [45] the deployment of nodes may be done in a random manner. A chief performance criterion of all sensor networks is the ability of the network to gather data, which motivates the need to quantify the coverage or sensing area of the network. It is also important to gauge the connectivity of the network or its capability of relaying the collected data back to the main location.
The random nature of unstructured sensor networks has motivated the use of stochastic geometry methods. For example, the tools of continuum percolation theory and coverage processes have been used to study the coverage and connectivity. [42] [46] One model that is used to study these networks and wireless networks in general is the Poisson–Boolean model, which is a type of coverage process from continuum percolation theory.
One of the main limitations of sensor networks is energy consumption, where usually each node has a battery and, perhaps, an embedded form of energy harvesting. To reduce energy consumption in sensor networks, various sleep schemes have been suggested that entail having a sub-collection of nodes go into a low energy-consuming sleep mode. These sleep schemes obviously affect the coverage and connectivity of sensor networks. Rudimentary power-saving models have been proposed such as the simple uncoordinated or decentralized "blinking" model where (at each time interval) each node independently powers down (or up) with some fixed probability. Using the tools of percolation theory, a new type of model, referred to as a blinking Boolean–Poisson model, was proposed to analyze the latency and connectivity performance of sensor networks with such sleep schemes. [42]
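A minimal version of such a blinking scheme can be sketched as follows (illustrative parameters): node positions are fixed, each node is independently asleep in each time slot with probability q, and we record how often a probe location is sensed. For a fixed configuration with K nodes in range, the long-run fraction is 1 − q^K, while averaging over node configurations gives 1 − exp(−(1 − q)λπr²):

```python
import numpy as np

rng = np.random.default_rng(7)

lam, side, r_sense, q = 0.5, 20.0, 1.0, 0.4   # density, window, sensing radius, sleep prob.

n = rng.poisson(lam * side**2)
nodes = rng.uniform(0, side, size=(n, 2))
probe = np.array([side / 2, side / 2])        # a location that should be sensed
K = int(np.sum(np.linalg.norm(nodes - probe, axis=1) <= r_sense))

slots, sensed = 500, 0
for _ in range(slots):
    awake = rng.uniform(size=n) >= q          # each node independently awake this slot
    dist = np.linalg.norm(nodes[awake] - probe, axis=1)
    sensed += bool(np.any(dist <= r_sense))

print("fraction of slots sensed    :", sensed / slots)
print("exact for this configuration:", 1 - q**K)
print("average over configurations :", 1 - np.exp(-(1 - q) * lam * np.pi * r_sense**2))
```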
A cellular network is a radio network distributed over some region with subdivisions called cells, each served by at least one fixed-location transceiver, known as a cell base station. In cellular networks, each cell uses a different set of frequencies from neighboring cells, to mitigate interference and provide higher bandwidth within each cell. The operators of cellular networks need to know certain performance or quality of service (QoS) metrics in order to dimension the networks, which means adjusting the density of the deployed base stations to meet the demand of user traffic for a required QoS level.
In cellular networks, the channel from the users (or phones) to the base station(s) is known as the uplink channel. Conversely, the downlink channel is from the base station(s) to the users. The downlink channel is the most studied with stochastic geometry models while models for the uplink case, which is a more difficult problem, are starting to be developed. [47]
In the downlink case, the transmitters and the receivers can be considered as two separate point processes. In the simplest case, there is one point-to-point channel per receiver (i.e. the user), and for a given receiver, this channel is from the closest transmitter (i.e. the base station) to the receiver. Another option consists of selecting the transmitter with the best signal power to the receiver. In any case, there may be several channels with the same transmitter.
A first approach for analyzing cellular networks is to consider the typical user, who can be assumed to be located anywhere on the plane. Under the assumption of point process ergodicity (satisfied when using homogeneous Poisson processes), the results for the typical user correspond to user averages. The coverage probability of the typical user is then interpreted as the proportion of network users who can connect to the cellular network.
Building off previous work done on an Aloha model, [44] the coverage probability for the typical user was derived for a Poisson network. [43] [48] The Poisson model of a cellular network proves to be more tractable than a hexagonal model. [43] This observation may be tempered, however, by the fact that a detailed and precise derivation of the probability distribution function of the channel attenuation between a random node and a reference base station in a hexagonal model was given in [49], a result that can be used to tractably derive the outage probability.
In the presence of sufficiently strong and independent log-normal shadow fading (or shadowing) and a singular power-law attenuation function, it was observed by simulation [50] for hexagonal networks, and later mathematically proved [51] [52] for general stationary (including hexagonal) networks, that quantities such as the SINR and SIR of the typical user behave stochastically as though the underlying network were Poisson. In other words, given a path loss function, using a Poisson cellular network model with constant shadowing is equivalent (in terms of SIR, SINR, etc.) to assuming sufficiently large and independent fading or shadowing in the mathematical model with the base stations positioned according to either a deterministic or random configuration with a constant density.
The results were originally derived for log-normal shadowing, but were then extended to a large family of fading and shadowing models. [52] For log-normal shadowing, it has also been mathematically shown that wireless networks can still appear Poisson if there is some correlation among the shadowing. [53]
In the context of cellular networks, a heterogeneous network (sometimes known as a HetNet) is a network that uses several types of base stations (macro-base stations, pico-base stations, and/or femto-base stations) in order to provide better coverage and bit rates. This is in particular used to cope with the difficulty of covering, with macro-base stations alone, open outdoor environments, office buildings, homes, and underground areas. Recent Poisson-based models have been developed to derive the coverage probability of such networks in the downlink case. [54] [55] [56] The general approach is to have a number of layers or "tiers" of networks which are then combined or superimposed onto each other into one heterogeneous or multi-tier network. If each tier is a Poisson network, then the combined network is also a Poisson network, owing to the superposition characteristic of Poisson processes. [22] Then the Laplace transform for this superimposed Poisson model is calculated, leading to the coverage probability in (the downlink channel of) a cellular network with multiple tiers when a user is connected to the instantaneously strongest base station [54] and when a user is connected to the strongest base station on average (not including small-scale fading). [55]
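The superposition step can be sketched directly (the tier densities and powers below are arbitrary assumptions): two independent Poisson tiers are merged into one multi-tier network, and a typical user associates with the base station that is strongest on average, i.e. transmit power times path loss, ignoring small-scale fading:

```python
import numpy as np

rng = np.random.default_rng(9)

side, alpha = 20.0, 4.0
tiers = [(0.02, 100.0), (0.1, 1.0)]   # (density, transmit power) of a macro and a pico tier

# Superposition: merge independent Poisson tiers into one multi-tier network.
bs, power = [], []
for lam, P in tiers:
    pts = rng.uniform(0, side, size=(rng.poisson(lam * side**2), 2))
    bs.append(pts)
    power.append(np.full(len(pts), P))
bs, power = np.vstack(bs), np.concatenate(power)

# The typical user at the window centre associates with the base station that
# is strongest on average: transmit power times path loss, no small-scale fading.
user = np.array([side / 2, side / 2])
avg_rx = power * np.linalg.norm(bs - user, axis=1) ** (-alpha)
best = int(avg_rx.argmax())
print("serving base station:", best, " transmit power:", power[best])
```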
In recent years the modeling approach of considering a "typical user" in cellular (or other) networks has been used extensively. This is, however, just a first approach which allows one to characterize only the spectral efficiency (or information rate) of the network. In other words, this approach captures the best possible service that can be given to a single user who does not need to share wireless network resources with other users.
Models beyond the typical user approach have been proposed with the aim of analyzing QoS metrics of a population of users, and not just a single user. Broadly speaking, these models can be classified into four types: static, semi-static, semi-dynamic and (fully) dynamic. [57]
The ultimate goal when constructing these models consists of relating the following three key network parameters: user traffic demand per surface unit, network density and user QoS metric(s). These relations form part of the network dimensioning tools, which allow the network operators to appropriately vary the density of the base stations to meet the traffic demands for a required performance level.
The MAC protocol controls when transmitters can access the wireless medium. The aim is to reduce or prevent collisions by limiting the power of interference experienced by an active receiver. The MAC protocol determines the pattern of simultaneously active channels, given the underlying pattern of available channels. Different MAC protocols hence perform different thinning operations on the available channels, which results in different stochastic geometry models being needed.
A slotted Aloha wireless network employs the Aloha MAC protocol where the channels access the medium, independently at each time interval, with some probability p. [2] If the underlying channels (that is, their transmitters for the point-to-point case) are positioned according to a Poisson process (with density λ), then the nodes accessing the network also form a Poisson network (with density pλ), which allows the use of the Poisson model. ALOHA is not only one of the simplest and most classic MAC protocols, but it was also shown to achieve Nash equilibria when interpreted as a power control scheme. [71]
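The Aloha thinning and the medium access trade-off mentioned earlier can be illustrated with the following sketch (illustrative parameters; Rayleigh fading, power-law path loss, bipolar transmitter–receiver pairs in a finite window): a larger p activates more links but also raises interference, so the density of successful transmissions peaks at an intermediate p:

```python
import numpy as np

rng = np.random.default_rng(8)

lam, side, alpha, r0, t = 1.0, 15.0, 4.0, 1.0, 1.0
n = rng.poisson(lam * side**2)
tx = rng.uniform(0, side, size=(n, 2))
theta = rng.uniform(0, 2 * np.pi, size=n)                   # receiver directions
rx = tx + r0 * np.column_stack([np.cos(theta), np.sin(theta)])

# Path gain from every transmitter i to every receiver j (no fading yet);
# the diagonal holds each pair's own link, at distance r0.
g = np.linalg.norm(tx[:, None, :] - rx[None, :, :], axis=2) ** (-alpha)

for p in (0.05, 0.1, 0.2, 0.4, 0.8):
    succ = 0
    for _ in range(50):                                     # Aloha time slots
        active = rng.uniform(size=n) < p                    # independent thinning
        P = rng.exponential(size=(n, n)) * g                # fresh Rayleigh fading
        sig = np.diag(P)
        interf = (P * active[:, None]).sum(axis=0) - sig * active
        succ += np.sum(active & (sig > t * interf))
    print(f"p = {p:4.2f}: successful links per unit area =", succ / 50 / side**2)
```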
Several early stochastic models of wireless networks were based on Poisson point processes with the aim of studying the performance of slotted Aloha. [7] [72] [73] Under Rayleigh fading and the power-law path-loss function, outage (or equivalently, coverage) probability expressions were derived by treating the interference term as a shot noise and using Laplace transforms, [19] [74] which was later extended to a general path-loss function, [31] [44] [75] and then further extended to a pure or non-slotted Aloha case. [76]
The carrier-sense multiple access (CSMA) MAC protocol controls the network in such a way that channels close to each other never access the medium simultaneously. When applied to a Poisson point process, this was shown to naturally lead to a Matérn-like hard-core (or soft-core in the case of fading) point process which exhibits the desired "repulsion". [2] [36] The probability for a channel to be scheduled is known in closed-form, as well as the so-called pair-correlation function of the point process of scheduled nodes. [2]
In a network with a code-division multiple access (CDMA) MAC protocol, each transmitter modulates its signal by a code that is orthogonal to that of the other signals, and which is known to its receiver. This mitigates the interference from other transmitters, and can be represented in a mathematical model by multiplying the interference by an orthogonality factor. Stochastic geometry models based on this type of representation were developed to analyze the coverage areas of transmitters positioned according to a Poisson process. [18]
In the previous MAC-based models, point-to-point channels were assumed and the interference was considered as noise. In recent years, models have been developed to study more elaborate channels arising from the discipline of network information theory. [77] More specifically, a model was developed for one of the simplest settings: a collection of transmitter–receiver pairs represented as a Poisson point process. [78] In this model, the effects of an interference reduction scheme involving "point-to-point codes" were examined. These codes, consisting of randomly and independently generated codewords, determine when transmitter–receiver pairs may exchange information, thus acting as a MAC protocol. Furthermore, in this model a collection or "party" of channels was defined for each such pair. This party is a multiple access channel, [77] namely the many-to-one situation for channels. The receiver of the party is the same as that of the pair, and the transmitter of the pair belongs to the set of transmitters of the party, together with other transmitters. Using stochastic geometry, the probability of coverage was derived as well as the geometric properties of the coverage cells. [78] It was also shown [77] that, when using the point-to-point codes and simultaneous decoding, the statistical gain obtained over a Poisson configuration is arbitrarily large compared to the scenario where interference is treated as noise.
Stochastic geometry wireless models have been proposed for several network types including cognitive radio networks, [79] [80] relay networks, [81] and vehicular ad hoc networks.
For further reading on stochastic geometry wireless network models, see the textbook by Haenggi, [4] the two-volume text by Baccelli and Błaszczyszyn [1] [2] (available online), and the survey article. [11] For interference in wireless networks, see the monograph on interference by Ganti and Haenggi [30] (available online). For an introduction to stochastic geometry and spatial statistics in a more general setting, see the lecture notes by Baddeley [21] (available online with Springer subscription). For a complete and rigorous treatment of point processes, see the two-volume text by Daley and Vere-Jones [35] [82] (available online with Springer subscription).
A wireless network is a computer network that uses wireless data connections between network nodes. Wireless networking allows homes, telecommunications networks, and business installations to avoid the costly process of introducing cables into a building, or as a connection between various equipment locations. Wireless telecommunications networks are generally implemented and administered using radio communication. This implementation takes place at the physical level (layer) of the OSI model network structure.
A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
A cognitive radio (CR) is a radio that can be programmed and configured dynamically to use the best channels in its vicinity to avoid user interference and congestion. Such a radio automatically detects available channels, then accordingly changes its transmission or reception parameters to allow more concurrent wireless communications in a given band at one location. This process is a form of dynamic spectrum management.
In telecommunications, a diversity scheme refers to a method for improving the reliability of a message signal by using two or more communication channels with different characteristics. Diversity is mainly used in radio communication and is a common technique for combatting fading and co-channel interference and avoiding error bursts. It is based on the fact that individual channels experience fades and interference at different, random times, i.e., they are at least partly independent. Multiple versions of the same signal may be transmitted and/or received and combined in the receiver. Alternatively, a redundant forward error correction code may be added and different parts of the message transmitted over different channels. Diversity techniques may exploit the multipath propagation, resulting in a diversity gain, often measured in decibels.
Radio resource management (RRM) is the system level management of co-channel interference, radio resources, and other radio transmission characteristics in wireless communication systems, for example cellular networks, wireless local area networks, wireless sensor systems, and radio broadcasting networks. RRM involves strategies and algorithms for controlling parameters such as transmit power, user allocation, beamforming, data rates, handover criteria, modulation scheme, error coding scheme, etc. The objective is to utilize the limited radio-frequency spectrum resources and radio network infrastructure as efficiently as possible.
Precoding is a generalization of beamforming to support multi-stream transmission in multi-antenna wireless communications. In conventional single-stream beamforming, the same signal is emitted from each of the transmit antennas with appropriate weighting such that the signal power is maximized at the receiver output. When the receiver has multiple antennas, single-stream beamforming cannot simultaneously maximize the signal level at all of the receive antennas. In order to maximize the throughput in multiple receive antenna systems, multi-stream transmission is generally required.
Multi-user MIMO (MU-MIMO) is a set of multiple-input and multiple-output (MIMO) technologies for multipath wireless communication, in which multiple users or terminals, each radioing over one or more antennas, communicate with one another. In contrast, single-user MIMO (SU-MIMO) involves a single multi-antenna-equipped user or terminal communicating with precisely one other similarly equipped node. Analogous to how OFDMA adds multiple-access capability to OFDM in the cellular-communications realm, MU-MIMO adds multiple-user capability to MIMO in the wireless realm.
In radio, cooperative multiple-input multiple-output is a technology that can effectively exploit the spatial domain of mobile fading channels to bring significant performance improvements to wireless communication systems. It is also called network MIMO, distributed MIMO, virtual MIMO, and virtual antenna arrays.
In mathematics, stochastic geometry is the study of random spatial patterns. At the heart of the subject lies the study of random point patterns. This leads to the theory of spatial point processes, hence notions of Palm conditioning, which extend to the more abstract setting of random measures.
In probability theory and statistics, Campbell's theorem or the Campbell–Hardy theorem is either a particular equation or set of results relating to the expectation of a function summed over a point process to an integral involving the mean measure of the point process, which allows for the calculation of expected value and variance of the random sum. One version of the theorem, also known as Campbell's formula, entails an integral equation for the aforementioned sum over a general point process, and not necessarily a Poisson point process. There also exist equations involving moment measures and factorial moment measures that are considered versions of Campbell's formula. All these results are employed in probability and statistics with a particular importance in the theory of point processes and queueing theory as well as the related fields stochastic geometry, continuum percolation theory, and spatial statistics.
In mathematics and probability theory, continuum percolation theory is a branch of mathematics that extends discrete percolation theory to continuous space. More specifically, the underlying points of discrete percolation form types of lattices whereas the underlying points of continuum percolation are often randomly positioned in some continuous space and form a type of point process. For each point, a random shape is frequently placed on it and the shapes overlap with each other to form clumps or components. As in discrete percolation, a common research focus of continuum percolation is studying the conditions of occurrence for infinite or giant components. Other shared concepts and analysis techniques exist in these two types of percolation theory as well as the study of random graphs and random geometric graphs.
In information theory and telecommunication engineering, the signal-to-interference-plus-noise ratio (SINR) is a quantity used to give theoretical upper bounds on channel capacity in wireless communication systems such as networks. Analogous to the signal-to-noise ratio (SNR) used often in wired communications systems, the SINR is defined as the power of a certain signal of interest divided by the sum of the interference power and the power of some background noise. If the power of the noise term is zero, then the SINR reduces to the signal-to-interference ratio (SIR). Conversely, zero interference reduces the SINR to the SNR, which is used less often when developing mathematical models of wireless networks such as cellular networks.
In probability and statistics, point process notation comprises the range of mathematical notation used to symbolically represent random objects known as point processes, which are used in related fields such as stochastic geometry, spatial statistics and continuum percolation theory and frequently serve as mathematical models of random phenomena, representable as points, in time, space or both.
In probability and statistics, a point process operation or point process transformation is a type of mathematical operation performed on a random object known as a point process, which are often used as mathematical models of phenomena that can be represented as points randomly located in space. These operations can be purely random, deterministic or both, and are used to construct new point processes, which can be then also used as mathematical models. The operations may include removing or thinning points from a point process, combining or superimposing multiple point processes into one point process or transforming the underlying space of the point process into another space. Point process operations and the resulting point processes are used in the theory of point processes and related fields such as stochastic geometry and spatial statistics.
In probability and statistics, a moment measure is a mathematical quantity, function or, more precisely, measure that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. Moment measures generalize the idea of (raw) moments of random variables, hence arise often in the study of point processes and related fields.
In probability and statistics, a spherical contact distribution function, first contact distribution function, or empty space function is a mathematical function that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. More specifically, a spherical contact distribution function is defined as probability distribution of the radius of a sphere when it first encounters or makes contact with a point in a point process. This function can be contrasted with the nearest neighbour function, which is defined in relation to some point in the point process as being the probability distribution of the distance from that point to its nearest neighbouring point in the same point process.
In probability theory, statistics and related fields, a Poisson point process is a type of mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another. The process's name derives from the fact that the number of points in any given finite region follows a Poisson distribution. The process and the distribution are named after French mathematician Siméon Denis Poisson. The process itself was discovered independently and repeatedly in several settings, including experiments on radioactive decay, telephone call arrivals and actuarial science.
In probability and statistics, a nearest neighbor function, nearest neighbor distance distribution, nearest-neighbor distribution function or nearest neighbor distribution is a mathematical function that is defined in relation to mathematical objects known as point processes, which are often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. More specifically, nearest neighbor functions are defined with respect to some point in the point process as being the probability distribution of the distance from this point to its nearest neighboring point in the same point process, hence they are used to describe the probability of another point existing within some distance of a point. A nearest neighbor function can be contrasted with a spherical contact distribution function, which is not defined in reference to some initial point but rather as the probability distribution of the radius of a sphere when it first encounters or makes contact with a point of a point process.
François Louis Baccelli is a senior researcher at INRIA Paris, in charge of the ERC project NEMO on network mathematics.
In the domain of wireless communication, air-to-ground channels (A2G) are used for linking airborne devices, such as drones and aircraft, with terrestrial communication equipment. These channels are instrumental in a wide array of applications, extending beyond commercial telecommunications — including important roles in 5G and forthcoming 6G networks, where aerial base stations are integral to Non-Terrestrial Networks — to encompass critical uses in emergency response, environmental monitoring, military communications, and the expanding domain of the internet of things (IoT). A comprehensive understanding of A2G channels, their operational mechanics, and distinct attributes is essential for the enhancement of wireless network performance.