Barnes interpolation


Barnes interpolation, named after Stanley L. Barnes, is the interpolation of unevenly spaced data points, from a set of measurements of an unknown function in two dimensions, into an analytic function of two variables. An example of a situation where the Barnes scheme is important is weather forecasting, [1] [2] where measurements are made wherever monitoring stations can be located, their positions being constrained by topography. Such interpolation is essential in data visualisation, e.g. in the construction of contour plots or other representations of analytic surfaces.


Introduction

Barnes proposed an objective multi-pass scheme for the interpolation of two-dimensional data. [3] [4] This provided a method for interpolating sea-level pressures across the entire United States of America, producing a synoptic chart for the country from dispersed monitoring stations. Researchers have subsequently improved the Barnes method to reduce the number of parameters required for calculation of the interpolated result, increasing the objectivity of the method. [5]

The method constructs a grid whose size is determined by the distribution of the two-dimensional data points. Using this grid, the function values are calculated at each grid point. To do this, the method uses a series of Gaussian functions, weighted by distance, to determine the relative importance of each measurement in setting the function values. Correction passes are then made to optimise the function values by accounting for the spectral response of the interpolated points.
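
As a concrete illustration of the grid-construction step, the following minimal Python/NumPy sketch builds a regular analysis grid spanning the extent of a set of scattered observations. The station coordinates, values and grid spacing here are made-up illustrative numbers, not taken from the references.

```python
import numpy as np

# Hypothetical scattered observations (station coordinates and measured values);
# the numbers are purely illustrative.
x_obs = np.array([0.2, 1.7, 3.1, 4.6, 2.9])
y_obs = np.array([0.5, 2.2, 1.1, 3.8, 4.0])
f_obs = np.array([1013.2, 1009.8, 1011.5, 1007.9, 1008.4])

# Regular analysis grid spanning the extent of the observations.
# The grid spacing dx is a free parameter (see "Parameter selection" below).
dx = 0.5
gx = np.arange(x_obs.min(), x_obs.max() + dx, dx)
gy = np.arange(y_obs.min(), y_obs.max() + dx, dx)
grid_x, grid_y = np.meshgrid(gx, gy)   # shape (len(gy), len(gx))
```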

Method

Here we describe the multi-pass scheme used in Barnes interpolation.

First pass

For a given grid point i, j the interpolated function g(x_i, y_i) is first approximated by a distance-weighted average of the data points. To do this, a Gaussian weighting is assigned to each data point for each grid point, such that

w_{ij} = \exp\!\left(-\frac{r_{ij}^2}{\kappa}\right)

where r_{ij} is the distance between the grid point (x_i, y_i) and the data point (x_j, y_j), and κ is a falloff parameter that controls the width of the Gaussian function. This parameter is set by the characteristic data spacing Δn: fixing the cutoff radius at which the weight falls to w_{ij} = e^{-1} gives

\kappa = 5.052 \left(\frac{2\,\Delta n}{\pi}\right)^2

The initial interpolation of the function from the measured values f(x_j, y_j) then becomes:

g_0(x_i, y_i) = \frac{\sum_j w_{ij}\, f(x_j, y_j)}{\sum_j w_{ij}}
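
Continuing the NumPy sketch above, the first pass can be written as a Gaussian-weighted average over the observations; the value of kappa used here is arbitrary and only for illustration (see "Parameter selection" below).

```python
def barnes_first_pass(tx, ty, x_obs, y_obs, f_obs, kappa):
    """Gaussian-weighted average of the observations at the target points."""
    # Squared distances from every target point to every observation;
    # the trailing axis runs over observations.
    r2 = (tx[..., None] - x_obs) ** 2 + (ty[..., None] - y_obs) ** 2
    w = np.exp(-r2 / kappa)                      # weights w_ij
    return (w * f_obs).sum(axis=-1) / w.sum(axis=-1)

g0 = barnes_first_pass(grid_x, grid_y, x_obs, y_obs, f_obs, kappa=2.0)
```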

Second pass

The correction for the next pass then utilises the difference between the observed field and the interpolated values at the measurement points to optimise the result: [1]

g_1(x_i, y_i) = g_0(x_i, y_i) + \frac{\sum_j w_{ij} \left( f(x_j, y_j) - g_0(x_j, y_j) \right)}{\sum_j w_{ij}}

It is worth noting that successive correction passes can be used to achieve better agreement between the interpolated function and the measured values at the experimental points.
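
A sketch of a single correction pass follows, reusing barnes_first_pass from above. For simplicity it implements only the first correction: the previous estimate at the observation points is taken to be the first-pass analysis evaluated there, and the same kappa is reused. In practice implementations often shrink kappa by the smoothing parameter γ on correction passes (see "Parameter selection" below).

```python
def barnes_correction_pass(grid_x, grid_y, g_prev, x_obs, y_obs, f_obs, kappa):
    """Add the Gaussian-weighted observation residuals to the previous field."""
    # Previous (first-pass) estimate evaluated at the observation points.
    g_prev_at_obs = barnes_first_pass(x_obs, y_obs, x_obs, y_obs, f_obs, kappa)
    residual = f_obs - g_prev_at_obs
    r2 = ((grid_x[..., None] - x_obs) ** 2 +
          (grid_y[..., None] - y_obs) ** 2)
    w = np.exp(-r2 / kappa)
    return g_prev + (w * residual).sum(axis=-1) / w.sum(axis=-1)

g1 = barnes_correction_pass(grid_x, grid_y, g0, x_obs, y_obs, f_obs, kappa=2.0)
```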

Parameter selection

Although described as an objective method, many parameters control the interpolated field. The choices of Δn, of the grid spacing Δx and of the smoothing parameter γ all influence the final result. Guidelines for the selection of these parameters have been suggested, [5] but the final values are free to be chosen within these guidelines.

The data spacing used in the analysis, Δn, may be chosen either by calculating the true inter-point spacing of the experimental data or by invoking a complete spatial randomness assumption, depending upon the degree of clustering in the observed data. The smoothing parameter γ is constrained to lie between 0.2 and 1.0. For reasons of interpolation integrity, the ratio of grid spacing to data spacing, Δx/Δn, is argued to be constrained between 0.3 and 0.5.
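
A hedged sketch of one way to choose these values in code follows, continuing the example above. The nearest-neighbour estimate of Δn uses SciPy's cKDTree, the κ relation uses the constant quoted above from Koch et al., [5] and γ and the Δx/Δn ratio are picked arbitrarily from within the suggested ranges; all of these choices should be checked against the references for a given application.

```python
from scipy.spatial import cKDTree

points = np.column_stack([x_obs, y_obs])

# Characteristic data spacing Δn: mean distance to the nearest neighbour.
dists, _ = cKDTree(points).query(points, k=2)   # column 0 is the point itself
delta_n = dists[:, 1].mean()

# Falloff parameter κ from Δn, and a smoothing parameter γ within [0.2, 1.0]
# for use on correction passes.
kappa = 5.052 * (2.0 * delta_n / np.pi) ** 2
gamma = 0.3

# Grid spacing chosen relative to Δn (here 0.4 Δn, within the suggested range).
dx = 0.4 * delta_n
```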

Notes

  1. 1 2 "Objective Rainfall Analysis System". Archived from the original on 22 July 2012. Retrieved 6 May 2009.
  2. Y.Kuleshov; G. de Hoedt; W.Wright & A.Brewster (2002). "Thunderstorm distribution and frequency in Australia". Australian Meteorological Magazine: 145–154.{{cite journal}}: Cite journal requires |journal= (help)
  3. Barnes, S. L (1964). "A technique for maximizing details in numerical weather-map analysis". Journal of Applied Meteorology. 3 (4): 396–409. Bibcode:1964JApMe...3..396B. doi: 10.1175/1520-0450(1964)003<0396:ATFMDI>2.0.CO;2 .
  4. Barnes, S.L (1964). "Mesoscale objective analysis using weighted time-series observations". NOAA Technical Memorandum. National Severe Storms laboratory.{{cite journal}}: Cite journal requires |journal= (help)
  5. 1 2 Koch, S. E.; DesJardins, M & Kocin, P (1983), "An interactive Barnes Objective Map Analysis Scheme for Use with Satellite and Conventional Data", Journal of Climate and Applied Meteorology, 22 (9): 1487–1503, doi:10.1175/1520-0450(1983)022<1487:AIBOMA>2.0.CO;2
