Radial basis function network

In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.[1][2][3]

Network architecture

Figure: Architecture of a radial basis function network. An input vector $\mathbf{x}$ is used as input to all radial basis functions, each with different parameters. The output of the network is a linear combination of the outputs from the radial basis functions.

Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers $\mathbf{x} \in \mathbb{R}^n$. The output of the network is then a scalar function of the input vector, $\varphi : \mathbb{R}^n \to \mathbb{R}$, and is given by

$$\varphi(\mathbf{x}) = \sum_{i=1}^{N} a_i \, \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right),$$

where $N$ is the number of neurons in the hidden layer, $\mathbf{c}_i$ is the center vector for neuron $i$, and $a_i$ is the weight of neuron $i$ in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance has been reported to perform better with pattern recognition [4][5]) and the radial basis function is commonly taken to be Gaussian

$$\rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) = \exp\!\left[-\beta_i \left\|\mathbf{x} - \mathbf{c}_i\right\|^{2}\right].$$

The Gaussian basis functions are local to the center vector in the sense that

$$\lim_{\left\|\mathbf{x}\right\| \to \infty} \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) = 0,$$

i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.

Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of $\mathbb{R}^n$.[6] This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.

The parameters $a_i$, $\mathbf{c}_i$, and $\beta_i$ are determined in a manner that optimizes the fit between $\varphi$ and the data.
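As a concrete illustration of the formulas above, the following Python/NumPy sketch evaluates an unnormalized Gaussian RBF network; the function and variable names (`rbf_forward`, `centers`, `betas`, `weights`) are illustrative choices, not part of any standard library.

```python
import numpy as np

def rbf_forward(x, centers, betas, weights):
    """Evaluate an unnormalized Gaussian RBF network at the input vector x.

    centers: (N, n) array of center vectors c_i
    betas:   (N,)   array of width parameters beta_i
    weights: (N,)   array of linear output weights a_i
    """
    # Squared Euclidean distances ||x - c_i||^2 to every center
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    # Gaussian activations rho(||x - c_i||) = exp(-beta_i * ||x - c_i||^2)
    activations = np.exp(-betas * sq_dist)
    # Linear combination in the output neuron
    return np.dot(weights, activations)

# Example: a network with three hidden neurons on a 2-dimensional input
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
betas = np.array([1.0, 1.0, 1.0])
weights = np.array([0.5, -1.0, 2.0])
print(rbf_forward(np.array([1.0, 0.5]), centers, betas, weights))
```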

Figure: Two unnormalized radial basis functions in one input dimension. The basis function centers are located at $c_1 = 0.75$ and $c_2 = 3.25$.

Normalized

Figure: Two normalized radial basis functions in one input dimension (sigmoids). The basis function centers are located at $c_1 = 0.75$ and $c_2 = 3.25$.
Figure: Three normalized radial basis functions in one input dimension. An additional basis function center has been added.
Figure: Four normalized radial basis functions in one input dimension. A fourth basis function center has been added; note that the first basis function (dark blue) has become localized.

Normalized architecture

In addition to the above unnormalized architecture, RBF networks can be normalized. In this case the mapping is

$$\varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \frac{\sum_{i=1}^{N} a_i \, \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)}{\sum_{i=1}^{N} \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)} = \sum_{i=1}^{N} a_i \, u\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right),$$

where

$$u\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) \ \stackrel{\mathrm{def}}{=}\ \frac{\rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)}{\sum_{j=1}^{N} \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_j\right\|\right)}$$

is known as a normalized radial basis function.
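A minimal sketch of the normalized mapping, under the same illustrative conventions as the earlier snippet (Gaussian kernels; the names `centers`, `betas`, `weights` are assumptions):

```python
import numpy as np

def normalized_rbf_forward(x, centers, betas, weights):
    """Evaluate a normalized Gaussian RBF network at the input vector x."""
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    rho = np.exp(-betas * sq_dist)   # unnormalized kernels rho(||x - c_i||)
    u = rho / np.sum(rho)            # normalized basis functions u(||x - c_i||)
    return np.dot(weights, u)        # sum_i a_i u(||x - c_i||)
```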

Theoretical motivation for normalization

There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density

$$P\!\left(\mathbf{x} \land y\right) = \frac{1}{N} \sum_{i=1}^{N} \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) \, \sigma\!\left(\left|y - e_i\right|\right),$$

where the weights $\mathbf{c}_i$ and $e_i$ are exemplars from the data and we require the kernels to be normalized

$$\int \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) \, d^{n}\mathbf{x} = 1$$

and

$$\int \sigma\!\left(\left|y - e_i\right|\right) \, dy = 1.$$

The probability densities in the input and output spaces are

$$P\!\left(\mathbf{x}\right) = \int P\!\left(\mathbf{x} \land y\right) \, dy = \frac{1}{N} \sum_{i=1}^{N} \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)$$

and

$$P\!\left(y\right) = \int P\!\left(\mathbf{x} \land y\right) \, d^{n}\mathbf{x} = \frac{1}{N} \sum_{i=1}^{N} \sigma\!\left(\left|y - e_i\right|\right).$$

The expectation of y given an input $\mathbf{x}$ is

$$\varphi\!\left(\mathbf{x}\right) \ \stackrel{\mathrm{def}}{=}\ E\!\left(y \mid \mathbf{x}\right) = \int y \, P\!\left(y \mid \mathbf{x}\right) \, dy,$$

where $P\!\left(y \mid \mathbf{x}\right)$ is the conditional probability of y given $\mathbf{x}$. The conditional probability is related to the joint probability through Bayes' theorem

$$P\!\left(y \mid \mathbf{x}\right) = \frac{P\!\left(\mathbf{x} \land y\right)}{P\!\left(\mathbf{x}\right)},$$

which yields

$$\varphi\!\left(\mathbf{x}\right) = \int y \, \frac{P\!\left(\mathbf{x} \land y\right)}{P\!\left(\mathbf{x}\right)} \, dy.$$

This becomes

$$\varphi\!\left(\mathbf{x}\right) = \frac{\sum_{i=1}^{N} e_i \, \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)}{\sum_{i=1}^{N} \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)} = \sum_{i=1}^{N} e_i \, u\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)$$

when the integrations are performed.

Local linear models

It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,

$$\varphi\!\left(\mathbf{x}\right) \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^{N} \left( a_i + \mathbf{b}_i \cdot \left(\mathbf{x} - \mathbf{c}_i\right) \right) \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)$$

and

$$\varphi\!\left(\mathbf{x}\right) \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^{N} \left( a_i + \mathbf{b}_i \cdot \left(\mathbf{x} - \mathbf{c}_i\right) \right) u\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)$$

in the unnormalized and normalized cases, respectively. Here $\mathbf{b}_i$ are weights to be determined. Higher order linear terms are also possible.

This result can be written

$$\varphi\!\left(\mathbf{x}\right) = \sum_{i=1}^{2N} \sum_{j=1}^{n} e_{ij} \, v_{ij}\!\left(\mathbf{x} - \mathbf{c}_i\right),$$

where

$$e_{ij} = \begin{cases} a_i, & \text{if } i \in [1, N] \\ b_{ij}, & \text{if } i \in [N+1, 2N] \end{cases}$$

and

$$v_{ij}\!\left(\mathbf{x} - \mathbf{c}_i\right) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} \, \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right), & \text{if } i \in [1, N] \\ \left(x_{j} - c_{ij}\right) \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right), & \text{if } i \in [N+1, 2N] \end{cases}$$

in the unnormalized case and

$$v_{ij}\!\left(\mathbf{x} - \mathbf{c}_i\right) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} \, u\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right), & \text{if } i \in [1, N] \\ \left(x_{j} - c_{ij}\right) u\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right), & \text{if } i \in [N+1, 2N] \end{cases}$$

in the normalized case.

Here $\delta_{ij}$ is a Kronecker delta function defined as

$$\delta_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j. \end{cases}$$

Training

RBF networks are typically trained from pairs of input and target values $\mathbf{x}(t), y(t)$, $t = 1, \dots, T$, by a two-step algorithm.

In the first step, the center vectors of the RBF functions in the hidden layer are chosen. This step can be performed in several ways; centers can be randomly sampled from some set of examples, or they can be determined using k-means clustering. Note that this step is unsupervised.

The second step simply fits a linear model with coefficients $w_i$ to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression/function estimation, is the least squares function:

$$K(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t} K_t(\mathbf{w}),$$

where

$$K_t(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \left[ y(t) - \varphi\!\left(\mathbf{x}(t), \mathbf{w}\right) \right]^{2}.$$

We have explicitly included the dependence on the weights $\mathbf{w}$. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit.
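The following sketch shows the two-step procedure for a Gaussian RBF network with a single shared width; the random sampling of centers and the helper name `fit_rbf_weights` are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_rbf_weights(X, y, n_centers=10, beta=1.0):
    """Two-step RBF training: unsupervised center choice, then linear least squares.

    X: (T, n) training inputs, y: (T,) training targets.
    """
    # Step 1 (unsupervised): pick centers by sampling training inputs at random
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    # Step 2: design matrix of hidden-layer outputs, G[t, i] = rho(||x(t) - c_i||)
    G = np.exp(-beta * np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1))
    # Least-squares weights minimizing K(w) = sum_t [y(t) - phi(x(t), w)]^2
    w, *_ = np.linalg.lstsq(G, y, rcond=None)
    return centers, w
```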

There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as

$$H(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ K(\mathbf{w}) + \lambda S(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t} H_t(\mathbf{w}),$$

where

$$S(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t} S_t(\mathbf{w})$$

and

$$H_t(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ K_t(\mathbf{w}) + \lambda S_t(\mathbf{w}),$$

where optimization of S maximizes smoothness and $\lambda$ is known as a regularization parameter.

A third optional backpropagation step can be performed to fine-tune all of the RBF net's parameters. [3]

Interpolation

RBF networks can be used to interpolate a function $y : \mathbb{R}^n \to \mathbb{R}$ when the values of that function are known on a finite number of points: $y(\mathbf{x}_i) = b_i$, $i = 1, \dots, N$. Taking the known points $\mathbf{x}_i$ to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points, $g_{ij} = \rho\!\left(\left\|\mathbf{x}_j - \mathbf{x}_i\right\|\right)$, the weights can be solved from the equation

$$\begin{bmatrix} g_{11} & g_{12} & \cdots & g_{1N} \\ g_{21} & g_{22} & \cdots & g_{2N} \\ \vdots & & \ddots & \vdots \\ g_{N1} & g_{N2} & \cdots & g_{NN} \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{bmatrix}.$$

It can be shown that the interpolation matrix in the above equation is non-singular, if the points $\mathbf{x}_i$ are distinct, and thus the weights $\mathbf{w}$ can be solved by simple linear algebra:

$$\mathbf{w} = \mathbf{G}^{-1} \mathbf{b},$$

where $\mathbf{G} = \left(g_{ij}\right)$.
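A sketch of exact interpolation in one input dimension with Gaussian kernels (the function names and the test data are illustrative):

```python
import numpy as np

def rbf_interpolate(x_known, b_known, beta=1.0):
    """Solve G w = b with one Gaussian center per known point."""
    # Interpolation matrix g_ij = rho(||x_j - x_i||)
    G = np.exp(-beta * (x_known[:, None] - x_known[None, :]) ** 2)
    return np.linalg.solve(G, b_known)   # non-singular for distinct points

def rbf_evaluate(x_new, x_known, w, beta=1.0):
    """Evaluate the interpolant at new points."""
    G = np.exp(-beta * (x_new[:, None] - x_known[None, :]) ** 2)
    return G @ w

# Example: interpolate sin(x) through six known points
x_known = np.linspace(0.0, np.pi, 6)
w = rbf_interpolate(x_known, np.sin(x_known), beta=2.0)
print(rbf_evaluate(np.array([0.5, 1.5]), x_known, w, beta=2.0))
```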

Function approximation

If the purpose is not to perform strict interpolation but instead more general function approximation or classification, the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases, first fixing the widths and centers and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron.

Training the basis function centers

Basis function centers can be randomly sampled among the input instances, obtained by the orthogonal least squares learning algorithm, or found by clustering the samples and choosing the cluster means as the centers.

The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers.
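A sketch of this phase, using scikit-learn's KMeans for the clustering and one common width heuristic (the heuristic constant is an assumption, not prescribed by the text):

```python
import numpy as np
from sklearn.cluster import KMeans

def choose_centers_and_width(X, n_centers=10):
    """Pick centers by k-means and one shared width from the maximum center spacing."""
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    # Maximum pairwise distance between the chosen centers
    d_max = np.max(np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1))
    # One common heuristic: sigma = d_max / sqrt(2 N), beta = 1 / (2 sigma^2)
    sigma = d_max / np.sqrt(2.0 * n_centers)
    beta = 1.0 / (2.0 * sigma ** 2)
    return centers, beta
```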

Pseudoinverse solution for the linear weights

After the centers $\mathbf{c}_i$ have been fixed, the weights that minimize the error at the output can be computed with a linear pseudoinverse solution:

$$\mathbf{w} = \mathbf{G}^{+} \mathbf{b},$$

where the entries of G are the values of the radial basis functions evaluated at the points $\mathbf{x}_j$: $g_{ji} = \rho\!\left(\left\|\mathbf{x}_j - \mathbf{c}_i\right\|\right)$.

The existence of this linear solution means that unlike multi-layer perceptron (MLP) networks, RBF networks have an explicit minimizer (when the centers are fixed).
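In NumPy the pseudoinverse solution is a one-liner; this small sketch only assumes the design matrix G defined above:

```python
import numpy as np

def pinv_weights(G, b):
    """Linear weights w = G^+ b for fixed centers.

    G: (T, N) matrix with entries g_ji = rho(||x_j - c_i||)
    b: (T,)   vector of target outputs
    """
    return np.linalg.pinv(G) @ b
```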

Gradient descent training of the linear weights

Another possible training algorithm is gradient descent. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found),

$$\mathbf{w}(t+1) = \mathbf{w}(t) - \nu \, \frac{d}{d\mathbf{w}} H_t(\mathbf{w}),$$

where $\nu$ is a "learning parameter."

For the case of training the linear weights, $a_i$, the algorithm becomes

$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\!\left(\mathbf{x}(t), \mathbf{w}\right) \right] \rho\!\left(\left\|\mathbf{x}(t) - \mathbf{c}_i\right\|\right)$$

in the unnormalized case and

$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\!\left(\mathbf{x}(t), \mathbf{w}\right) \right] u\!\left(\left\|\mathbf{x}(t) - \mathbf{c}_i\right\|\right)$$

in the normalized case.

For local linear architectures, gradient-descent training is

$$e_{ij}(t+1) = e_{ij}(t) + \nu \left[ y(t) - \varphi\!\left(\mathbf{x}(t), \mathbf{w}\right) \right] v_{ij}\!\left(\mathbf{x}(t) - \mathbf{c}_i\right).$$
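A sketch of sample-by-sample gradient descent on the linear weights of an unnormalized Gaussian RBF network (the constant factor of the gradient is absorbed into the learning rate, and the helper name is an assumption):

```python
import numpy as np

def sgd_train_weights(X, y, centers, beta, nu=0.1, n_epochs=1):
    """Gradient descent on the linear weights a_i for a squared-error objective."""
    a = np.zeros(len(centers))
    for _ in range(n_epochs):
        for x_t, y_t in zip(X, y):
            rho = np.exp(-beta * np.sum((centers - x_t) ** 2, axis=1))
            err = y_t - np.dot(a, rho)   # y(t) - phi(x(t), w)
            a += nu * err * rho          # step against the gradient of the error
            # For the normalized network, replace rho by rho / rho.sum()
    return a
```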

Projection operator training of the linear weights

For the case of training the linear weights, $a_i$ and $e_{ij}$, the algorithm becomes

$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\!\left(\mathbf{x}(t), \mathbf{w}\right) \right] \frac{\rho\!\left(\left\|\mathbf{x}(t) - \mathbf{c}_i\right\|\right)}{\sum_{j=1}^{N} \rho^{2}\!\left(\left\|\mathbf{x}(t) - \mathbf{c}_j\right\|\right)}$$

in the unnormalized case and

$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\!\left(\mathbf{x}(t), \mathbf{w}\right) \right] \frac{u\!\left(\left\|\mathbf{x}(t) - \mathbf{c}_i\right\|\right)}{\sum_{j=1}^{N} u^{2}\!\left(\left\|\mathbf{x}(t) - \mathbf{c}_j\right\|\right)}$$

in the normalized case and

$$e_{ij}(t+1) = e_{ij}(t) + \nu \left[ y(t) - \varphi\!\left(\mathbf{x}(t), \mathbf{w}\right) \right] \frac{v_{ij}\!\left(\mathbf{x}(t) - \mathbf{c}_i\right)}{\sum_{i=1}^{2N} \sum_{j=1}^{n} v_{ij}^{2}\!\left(\mathbf{x}(t) - \mathbf{c}_i\right)}$$

in the local-linear case.

For one basis function, projection operator training reduces to Newton's method.

Figure 6: Logistic map time series. Repeated iteration of the logistic map generates a chaotic time series. The values lie between zero and one. Displayed here are the 100 training points used to train the examples in this section. The weights c are the first five points from this time series.

Examples

Logistic map

The basic properties of radial basis functions can be illustrated with a simple mathematical map, the logistic map, which maps the unit interval onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore function approximation, time series prediction, and control theory. The map originated from the field of population dynamics and became the prototype for chaotic time series. The map, in the fully chaotic regime, is given by

$$x(t+1) \ \stackrel{\mathrm{def}}{=}\ f\!\left[x(t)\right] = 4 x(t) \left[ 1 - x(t) \right],$$

where t is a time index. The value of x at time t + 1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.

Generation of the time series from this equation is the forward problem. The examples here illustrate the inverse problem: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate

$$x(t+1) = f\!\left[x(t)\right] \approx \varphi\!\left[x(t), \mathbf{w}\right]$$

for f.
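The forward problem (generating the training data) is easy to reproduce; a minimal sketch, with the seed value chosen arbitrarily:

```python
import numpy as np

def logistic_series(x0=0.3, n=100):
    """Generate n points of the fully chaotic logistic map x(t+1) = 4 x(t) (1 - x(t))."""
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    return x

series = logistic_series(n=101)
# Training pairs for the inverse problem: inputs x(t), targets x(t+1)
X_train, y_train = series[:-1], series[1:]
```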

Function approximation

Unnormalized radial basis functions

The architecture is

$$\varphi\!\left(\mathbf{x}\right) \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^{N} a_i \, \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right),$$

where

$$\rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) = \exp\!\left[-\beta \left\|\mathbf{x} - \mathbf{c}_i\right\|^{2}\right] = \exp\!\left[-\beta \left(x(t) - c_i\right)^{2}\right].$$

Figure 7: Unnormalized basis functions. The logistic map (blue) and the approximation to the logistic map (red) after one pass through the training set.

Since the input is a scalar rather than a vector, the input dimension is one. We choose the number of basis functions as N = 5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight $\beta$ is taken to be a constant equal to 5. The weights $c_i$ are five exemplars from the time series. The weights $a_i$ are trained with projection operator training:

$$a_i(t+1) = a_i(t) + \nu \left[ x(t+1) - \varphi\!\left(x(t), \mathbf{w}\right) \right] \frac{\rho\!\left(\left\|x(t) - c_i\right\|\right)}{\sum_{j=1}^{N} \rho^{2}\!\left(\left\|x(t) - c_j\right\|\right)},$$

where the learning rate $\nu$ is taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error is 0.15.
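A sketch of this experiment, reusing `logistic_series` from the earlier snippet; the exact update rule of the original is not fully recoverable, so the projection-style normalization by the summed squared activations is an assumption:

```python
import numpy as np

series = logistic_series(n=101)
X, Y = series[:-1], series[1:]          # 100 training pairs

beta, nu = 5.0, 0.3
centers = series[:5]                    # five exemplars used as centers
a = np.zeros(5)

for x_t, y_t in zip(X, Y):              # one pass through the training set
    rho = np.exp(-beta * (x_t - centers) ** 2)
    err = y_t - np.dot(a, rho)
    a += nu * err * rho / np.sum(rho ** 2)   # projection-style update (assumed form)

pred = np.array([np.dot(a, np.exp(-beta * (x - centers) ** 2)) for x in X])
print("rms error:", np.sqrt(np.mean((Y - pred) ** 2)))
```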

Figure 8: Normalized basis functions. The logistic map (blue) and the approximation to the logistic map (red) after one pass through the training set. Note the improvement over the unnormalized case.

Normalized radial basis functions

The normalized RBF architecture is

$$\varphi\!\left(\mathbf{x}\right) \ \stackrel{\mathrm{def}}{=}\ \frac{\sum_{i=1}^{N} a_i \, \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)}{\sum_{i=1}^{N} \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)} = \sum_{i=1}^{N} a_i \, u\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right),$$

where

$$u\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) \ \stackrel{\mathrm{def}}{=}\ \frac{\rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)}{\sum_{j=1}^{N} \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_j\right\|\right)}.$$

Again:

$$\rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) = \exp\!\left[-\beta \left\|\mathbf{x} - \mathbf{c}_i\right\|^{2}\right] = \exp\!\left[-\beta \left(x(t) - c_i\right)^{2}\right].$$

Again, we choose the number of basis functions as five and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight $\beta$ is taken to be a constant equal to 6. The weights $c_i$ are five exemplars from the time series. The weights $a_i$ are trained with projection operator training:

$$a_i(t+1) = a_i(t) + \nu \left[ x(t+1) - \varphi\!\left(x(t), \mathbf{w}\right) \right] \frac{u\!\left(\left\|x(t) - c_i\right\|\right)}{\sum_{j=1}^{N} u^{2}\!\left(\left\|x(t) - c_j\right\|\right)},$$

where the learning rate $\nu$ is again taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error on a test set of 100 exemplars is 0.084, smaller than the unnormalized error. Normalization yields an improvement in accuracy. Typically, accuracy with normalized basis functions increases even more over unnormalized functions as input dimensionality increases.

Figure 9: Normalized basis functions. The logistic map (blue) and the approximation to the logistic map (red) as a function of time. Note that the approximation is good for only a few time steps. This is a general characteristic of chaotic time series.

Time series prediction

Once the underlying geometry of the time series is estimated as in the previous examples, a prediction for the time series can be made by iteration:

$$\hat{x}(t+1) = \varphi\!\left[\hat{x}(t), \mathbf{w}\right], \qquad \hat{x}(0) = x(0).$$

A comparison of the actual and estimated time series is displayed in the figure. The estimated time series starts out at time zero with an exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps.

Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series and a consequence of the sensitive dependence on initial conditions common to chaotic systems: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the Lyapunov exponent.
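A sketch of the iterated prediction, assuming the fitted parameters `centers`, `a`, `beta` from the earlier training snippet:

```python
import numpy as np

def predict_series(x0, centers, a, beta, n_steps=20):
    """Iterate the fitted map: feed each prediction back in as the next input."""
    x_hat = np.empty(n_steps)
    x_hat[0] = x0                                  # exact knowledge of x(0)
    for t in range(n_steps - 1):
        rho = np.exp(-beta * (x_hat[t] - centers) ** 2)
        x_hat[t + 1] = np.dot(a, rho)              # x_hat(t+1) = phi[x_hat(t), w]
    return x_hat
```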

Control of a chaotic time series

Figure 10: Control of the logistic map. The system is allowed to evolve naturally for 49 time steps. At time 50 control is turned on. The desired trajectory for the time series is red. The system under control learns the underlying dynamics and drives the time series to the desired output. The architecture is the same as for the time series prediction example.

We assume the output of the logistic map can be manipulated through a control parameter $c\!\left[x(t), t\right]$ such that

$$x(t+1) = 4 x(t) \left[ 1 - x(t) \right] + c\!\left[x(t), t\right].$$

The goal is to choose the control parameter in such a way as to drive the time series to a desired output $d(t)$. This can be done if we choose the control parameter to be

$$c\!\left[x(t), t\right] \ \stackrel{\mathrm{def}}{=}\ -\varphi\!\left[x(t), \mathbf{w}\right] + d(t+1),$$

where

$$\varphi\!\left[x(t), \mathbf{w}\right] \approx 4 x(t) \left[ 1 - x(t) \right]$$

is an approximation to the underlying natural dynamics of the system.
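A sketch of one controlled step under this control law, assuming the fitted parameters `centers`, `a`, `beta` from the training snippets above:

```python
import numpy as np

def controlled_step(x_t, d_next, centers, a, beta):
    """Advance the controlled logistic map by one step toward the desired value d(t+1)."""
    rho = np.exp(-beta * (x_t - centers) ** 2)
    phi = np.dot(a, rho)                   # learned approximation of 4 x (1 - x)
    c = -phi + d_next                      # control parameter c[x(t), t]
    return 4.0 * x_t * (1.0 - x_t) + c     # true dynamics plus control
```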

The learning algorithm is given by

$$a_i(t+1) = a_i(t) + \nu \, \varepsilon \, \frac{u\!\left(\left\|x(t) - c_i\right\|\right)}{\sum_{j=1}^{N} u^{2}\!\left(\left\|x(t) - c_j\right\|\right)},$$

where

$$\varepsilon \ \stackrel{\mathrm{def}}{=}\ x(t+1) - \varphi\!\left[x(t), \mathbf{w}\right].$$


References

  1. Broomhead, D. S.; Lowe, David (1988). Radial basis functions, multi-variable functional interpolation and adaptive networks (Technical report). RSRE. 4148. Archived from the original on April 9, 2013.
  2. Broomhead, D. S.; Lowe, David (1988). "Multivariable functional interpolation and adaptive networks" (PDF). Complex Systems. 2: 321–355. Archived (PDF) from the original on 2020-12-01. Retrieved 2019-01-29.
  3. Schwenker, Friedhelm; Kestler, Hans A.; Palm, Günther (2001). "Three learning phases for radial-basis-function networks". Neural Networks. 14 (4–5): 439–458. CiteSeerX 10.1.1.109.312. doi:10.1016/s0893-6080(01)00027-2. PMID 11411631.
  4. Beheim, Larbi; Zitouni, Adel; Belloir, Fabien (January 2004). "New RBF neural network classifier with optimized hidden neurons number".
  5. Ibrikci, Turgay; Brandt, M. E.; Wang, Guanyu; Acikkar, Mustafa (23–26 October 2002). Mahalanobis distance with radial basis function network on protein secondary structures. Proceedings of the Second Joint 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society. Engineering in Medicine and Biology Society, Proceedings of the Annual International Conference of the IEEE. Vol. 3. Houston, TX, USA (published 6 January 2003). pp. 2184–2185. doi:10.1109/IEMBS.2002.1053230. ISBN 0-7803-7612-9. ISSN 1094-687X.
  6. Park, J.; Sandberg, I. W. (Summer 1991). "Universal Approximation Using Radial-Basis-Function Networks". Neural Computation. 3 (2): 246–257. doi:10.1162/neco.1991.3.2.246. PMID 31167308. S2CID 34868087.

Further reading