Characteristic modes (CM) form a set of functions which, under specific boundary conditions, diagonalize the operator relating the field and the induced sources. Under certain conditions, the set of CM is unique and complete (at least theoretically) and is thereby capable of describing the behavior of a studied object in full.
This article deals with characteristic mode decomposition in electromagnetics, the domain in which the CM theory was originally proposed.
CM decomposition was originally introduced as a set of modes diagonalizing a scattering matrix. [1] [2] The theory was subsequently generalized by Harrington and Mautz for antennas. [3] [4] Harrington, Mautz and their students successively developed several other extensions of the theory. [5] [6] [7] [8] Even though some precursors [9] were published back in the late 1940s, the full potential of CM remained unrecognized for a further 40 years. The capabilities of CM were revisited [10] in 2007 and, since then, interest in CM has increased dramatically. The subsequent boom of CM theory is reflected in the number of prominent publications and applications.
For simplicity, only the original form of the CM, formulated for perfectly electrically conducting (PEC) bodies in free space, is treated in this article. The electromagnetic quantities are represented solely as Fourier images in the frequency domain, and the Lorenz gauge is used.
The scattering of an electromagnetic wave on a PEC body is represented via a boundary condition on the PEC body, namely

$$\hat{\boldsymbol{n}} \times \boldsymbol{E}^{\mathrm{i}} = - \hat{\boldsymbol{n}} \times \boldsymbol{E}^{\mathrm{s}},$$

with $\hat{\boldsymbol{n}}$ representing the unitary normal to the PEC surface, $\boldsymbol{E}^{\mathrm{i}}$ representing the incident electric field intensity, and $\boldsymbol{E}^{\mathrm{s}}$ representing the scattered electric field intensity, defined as

$$\boldsymbol{E}^{\mathrm{s}} = - \mathrm{j} \omega \boldsymbol{A} - \nabla \varphi,$$

with $\mathrm{j}$ being the imaginary unit, $\omega$ being the angular frequency, $\boldsymbol{A}$ being the vector potential

$$\boldsymbol{A}\left(\boldsymbol{r}\right) = \mu_0 \int\limits_\Omega \boldsymbol{J}\left(\boldsymbol{r}'\right) G\left(\boldsymbol{r},\boldsymbol{r}'\right) \mathrm{d}S',$$

$\mu_0$ being the vacuum permeability, $\varphi$ being the scalar potential

$$\varphi\left(\boldsymbol{r}\right) = - \frac{1}{\mathrm{j}\omega\varepsilon_0} \int\limits_\Omega \nabla' \cdot \boldsymbol{J}\left(\boldsymbol{r}'\right) G\left(\boldsymbol{r},\boldsymbol{r}'\right) \mathrm{d}S',$$

$\varepsilon_0$ being the vacuum permittivity, $G\left(\boldsymbol{r},\boldsymbol{r}'\right)$ being the scalar Green's function

$$G\left(\boldsymbol{r},\boldsymbol{r}'\right) = \frac{\mathrm{e}^{-\mathrm{j} k \left|\boldsymbol{r}-\boldsymbol{r}'\right|}}{4\pi\left|\boldsymbol{r}-\boldsymbol{r}'\right|},$$

and $k$ being the wavenumber. The integro-differential operator relating the induced surface current density $\boldsymbol{J}$ to the tangential scattered field is the one to be diagonalized via characteristic modes.
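As a numerical aside, the free-space scalar Green's function above can be evaluated directly. The following Python sketch (a minimal illustration, not part of the original formulation; the operating frequency and observation points are assumed inputs) computes $G\left(\boldsymbol{r},\boldsymbol{r}'\right)$ together with the wavenumber $k = \omega\sqrt{\mu_0\varepsilon_0}$:

```python
import numpy as np
from scipy.constants import epsilon_0, mu_0

def wavenumber(frequency_hz):
    """Free-space wavenumber k = omega * sqrt(mu_0 * eps_0)."""
    return 2.0 * np.pi * frequency_hz * np.sqrt(mu_0 * epsilon_0)

def greens_function(r, r_prime, k):
    """Free-space scalar Green's function G(r, r') = exp(-j k |r - r'|) / (4 pi |r - r'|).

    `r` and `r_prime` are 3-element position vectors in metres; the singular
    point r = r' is not handled here (a real MoM code treats it separately).
    """
    distance = np.linalg.norm(np.asarray(r) - np.asarray(r_prime))
    return np.exp(-1j * k * distance) / (4.0 * np.pi * distance)

# Example: evaluate G between two points 10 cm apart at 1 GHz (assumed values).
k = wavenumber(1e9)
print(greens_function([0.0, 0.0, 0.0], [0.1, 0.0, 0.0], k))
```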
The governing equation of the CM decomposition is

$$\mathcal{X}\left(\boldsymbol{J}_n\right) = \lambda_n \, \mathcal{R}\left(\boldsymbol{J}_n\right), \qquad (1)$$

with $\mathcal{R}$ and $\mathcal{X}$ being the real and imaginary parts of the impedance operator, respectively: $\mathcal{Z}\left(\boldsymbol{J}\right) = \mathcal{R}\left(\boldsymbol{J}\right) + \mathrm{j}\mathcal{X}\left(\boldsymbol{J}\right)$. The operator $\mathcal{Z}$ is defined by

$$\hat{\boldsymbol{n}} \times \boldsymbol{E}^{\mathrm{i}} = \hat{\boldsymbol{n}} \times \mathcal{Z}\left(\boldsymbol{J}\right). \qquad (2)$$
The outcome of (1) is a set of characteristic modes $\left\{\boldsymbol{J}_n\right\}$, $n \in \left\{1, 2, \dots\right\}$, accompanied by associated characteristic numbers $\left\{\lambda_n\right\}$. Clearly, (1) is a generalized eigenvalue problem, which, however, cannot be analytically solved (except for a few canonical bodies [11] ). Therefore, the numerical solution described in the following paragraph is commonly employed.
Discretization of the body of the scatterer $\Omega$ into $M$ subdomains as

$$\Omega = \bigcup\limits_{m=1}^{M} \Omega_m$$

and using a set of linearly independent piece-wise continuous functions $\left\{\boldsymbol{\psi}_n\right\}$, $n \in \left\{1, \dots, N\right\}$, allows the current density $\boldsymbol{J}$ to be represented as

$$\boldsymbol{J}\left(\boldsymbol{r}\right) \approx \sum\limits_{n=1}^{N} I_n \boldsymbol{\psi}_n\left(\boldsymbol{r}\right)$$

and, by applying the Galerkin method, the impedance operator (2) to be represented by its matrix counterpart $\mathbf{Z} = \left[Z_{uv}\right] = \mathbf{R} + \mathrm{j}\mathbf{X}$ with elements

$$Z_{uv} = \int\limits_\Omega \boldsymbol{\psi}_u \cdot \mathcal{Z}\left(\boldsymbol{\psi}_v\right) \mathrm{d}S.$$
The eigenvalue problem (1) is then recast into its matrix form

$$\mathbf{X} \mathbf{I}_n = \lambda_n \mathbf{R} \mathbf{I}_n,$$

which can easily be solved using, e.g., the generalized Schur decomposition or the implicitly restarted Arnoldi method, yielding a finite set of expansion coefficients $\left\{\mathbf{I}_n\right\}$ and associated characteristic numbers $\left\{\lambda_n\right\}$. The properties of the CM decomposition are investigated below.
The properties of CM decomposition are demonstrated in its matrix form.
First, recall that the bilinear forms

$$P_{\mathrm{r}} = \frac{1}{2} \mathbf{I}^{\mathrm{H}} \mathbf{R} \mathbf{I}$$

and

$$P_{\mathrm{react}} = \frac{1}{2} \mathbf{I}^{\mathrm{H}} \mathbf{X} \mathbf{I},$$

where superscript $\mathrm{H}$ denotes the Hermitian transpose and where $\mathbf{I}$ represents an arbitrary surface current distribution, correspond to the radiated power and the reactive net power, [12] respectively. The following properties can then be easily distilled:
The Rayleigh quotient

$$\lambda_n = \frac{\mathbf{I}_n^{\mathrm{H}} \mathbf{X} \mathbf{I}_n}{\mathbf{I}_n^{\mathrm{H}} \mathbf{R} \mathbf{I}_n}$$

then spans the range of $\left(-\infty, \infty\right)$ and indicates whether the characteristic mode is capacitive ($\lambda_n < 0$), inductive ($\lambda_n > 0$), or in resonance ($\lambda_n = 0$). In reality, the Rayleigh quotient is limited by the numerical dynamics of the machine precision used and the number of correctly found modes is limited.
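The two bilinear forms and the Rayleigh quotient can be evaluated directly from $\mathbf{R}$, $\mathbf{X}$ and a current vector. The short Python sketch below (a standalone illustration with made-up matrices, not data from a real scatterer) also classifies a mode from the sign of its characteristic number:

```python
import numpy as np

def modal_powers(R, X, I_vec):
    """Radiated power, net reactive power and the Rayleigh quotient for a current vector."""
    p_rad = 0.5 * np.real(np.vdot(I_vec, R @ I_vec))     # (1/2) I^H R I
    p_react = 0.5 * np.real(np.vdot(I_vec, X @ I_vec))   # (1/2) I^H X I
    return p_rad, p_react, p_react / p_rad               # the 1/2 factors cancel in the quotient

def classify(lam, tol=1e-3):
    """Capacitive (lambda < 0), inductive (lambda > 0) or (nearly) resonant mode."""
    if abs(lam) < tol:
        return "resonant"
    return "inductive" if lam > 0.0 else "capacitive"

# Illustrative 2x2 matrices and current vector (hypothetical values only).
R = np.array([[2.0, 0.5], [0.5, 1.0]])
X = np.array([[1.0, -0.2], [-0.2, -3.0]])
I_vec = np.array([1.0 + 0.0j, 0.5 - 0.2j])
p_rad, p_react, lam = modal_powers(R, X, I_vec)
print(p_rad, p_react, lam, classify(lam))
```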
Characteristic modes normalized to unit radiated power further satisfy $\mathbf{I}_u^{\mathrm{H}} \mathbf{R} \mathbf{I}_v = \delta_{uv}$ and $\mathbf{I}_u^{\mathrm{H}} \mathbf{X} \mathbf{I}_v = \lambda_u \delta_{uv}$, i.e., $\mathbf{I}_u^{\mathrm{H}} \mathbf{Z} \mathbf{I}_v = \left(1 + \mathrm{j}\lambda_u\right)\delta_{uv}$. This last relation presents the ability of characteristic modes to diagonalize the impedance operator (2) and demonstrates far-field orthogonality, i.e.,

$$\frac{1}{2} \sqrt{\frac{\varepsilon_0}{\mu_0}} \int\limits_{0}^{2\pi} \int\limits_{0}^{\pi} \boldsymbol{F}_u^{\ast} \cdot \boldsymbol{F}_v \sin\vartheta \, \mathrm{d}\vartheta \, \mathrm{d}\varphi = \delta_{uv},$$

where $\boldsymbol{F}_n$ denotes the characteristic far field of the $n$-th mode and $\delta_{uv}$ is the Kronecker delta.
The modal currents can be used to evaluate antenna parameters in their modal form, for example the modal far field, modal directivity, modal radiation efficiency, or modal quality factor. These quantities can be used for analysis, feeding synthesis, optimization of the radiator's shape, or antenna characterization.
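Two widely used modal descriptors derived from the characteristic numbers (not listed explicitly above) are the modal significance $\left|1/\left(1 + \mathrm{j}\lambda_n\right)\right|$ and the characteristic angle $\alpha_n = 180^\circ - \arctan\lambda_n$; both peak, respectively at 1 and $180^\circ$, when a mode resonates. The short Python sketch below computes them from an array of characteristic numbers (the sample values are illustrative only):

```python
import numpy as np

def modal_significance(lambdas):
    """Modal significance MS_n = |1 / (1 + j*lambda_n)|; MS_n = 1 at resonance."""
    return np.abs(1.0 / (1.0 + 1j * np.asarray(lambdas)))

def characteristic_angle_deg(lambdas):
    """Characteristic angle alpha_n = 180 deg - arctan(lambda_n); 180 deg at resonance."""
    return 180.0 - np.degrees(np.arctan(np.asarray(lambdas)))

lambdas = np.array([-5.0, -0.1, 0.0, 2.5])
print(modal_significance(lambdas))
print(characteristic_angle_deg(lambdas))
```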
The number of potential applications is enormous and still growing, and many prospective research topics remain open.
CM decomposition has recently been implemented in major electromagnetic simulators, namely in FEKO, [42] CST-MWS, [43] and WIPL-D. [44] Other packages, for example HFSS [45] and CEM One, [46] are expected to support it soon. In addition, there is a plethora of in-house and academic packages capable of evaluating CM and many associated parameters.
CM are useful for better understanding a radiator's operation and have been used with great success for many practical purposes. However, it is important to stress that they are not a universal tool, and other formulations, such as energy modes, [47] radiation modes, [47] stored energy modes [32] or radiation efficiency modes, [48] are often better suited to a given task.