# Interval arithmetic

Interval arithmetic (also known as interval mathematics, interval analysis, or interval computation) is a mathematical technique used to put bounds on rounding errors and measurement errors in mathematical computation. Numerical methods using interval arithmetic can guarantee reliable, mathematically correct results. Instead of representing a value as a single number, interval arithmetic represents each value as a range of possibilities. For example, instead of estimating the height of someone as exactly 2.0 metres, using interval arithmetic one might be certain that that person is somewhere between 1.97 and 2.03 metres.

Mathematically, instead of working with an uncertain real ${\displaystyle x}$, one works with the ends of an interval ${\displaystyle [a,b]}$ that contains ${\displaystyle x}$. In interval arithmetic, any variable ${\displaystyle x}$ lies in the closed interval between ${\displaystyle a}$ and ${\displaystyle b}$. A function ${\displaystyle f}$, when applied to ${\displaystyle x}$, yields an uncertain result; ${\displaystyle f}$ produces an interval ${\displaystyle [c,d]}$ which includes all the possible values for ${\displaystyle f(x)}$ for all ${\displaystyle x\in [a,b]}$.

Interval arithmetic is suitable for a variety of purposes. The most common use is in software, to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations) and optimization problems.

## Introduction

The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds for the range of a function in one or more variables. These endpoints are not necessarily the true supremum or infimum, since the precise calculation of those values can be difficult or impossible; the bounds need only contain the function's range as a subset.

This treatment is typically limited to real intervals, so quantities of the form

${\displaystyle [a,b]=\{x\in \mathbb {R} \,|\,a\leq x\leq b\},}$

where ${\displaystyle a={-\infty }}$ and ${\displaystyle b={\infty }}$ are allowed. With one of ${\displaystyle a}$, ${\displaystyle b}$ infinite, the interval would be an unbounded interval; with both infinite, the interval would be the extended real number line. Since a real number ${\displaystyle r}$ can be interpreted as the interval ${\displaystyle [r,r],}$ intervals and real numbers can be freely combined.

As with traditional calculations with real numbers, simple arithmetic operations and functions on elementary intervals must first be defined. [1] More complicated functions can be calculated from these basic elements. [1]

### Example

As an example, consider the calculation of body mass index (BMI) and assessing whether a person is overweight. BMI is calculated as a person's body weight in kilograms divided by the square of their height in metres. A bathroom scale may have a resolution of one kilogram, so the true weight is rounded to the nearest whole number: intermediate values such as 79.6 kg and 80.3 kg cannot be discerned from a reading of 80 kg. It is unlikely that when the scale reads 80 kg, the person weighs exactly 80.0 kg. Under normal rounding to the nearest value, a reading of 80 kg indicates a weight between 79.5 kg and 80.5 kg, which corresponds to the interval ${\displaystyle [79.5,80.5]}$.

For a man who weighs 80 kg and is 1.80 m tall, the BMI is approximately 24.7. A weight of 79.5 kg at the same height yields a BMI of approximately 24.537, while a weight of 80.5 kg yields approximately 24.846. Since the function is monotonically increasing in weight, we conclude that the true BMI lies in the range ${\displaystyle [24.537,24.846]}$. Since the entire range is less than 25, the cutoff between normal and excessive weight, we conclude that the man is of normal weight.

The error in this case does not affect the conclusion (normal weight), but this is not always so. If the man were slightly heavier, the BMI's range might include the cutoff value of 25. In that case, the scale's precision would have been insufficient to make a definitive conclusion.

Also, note that the range of BMI values could be reported as ${\displaystyle [24.5,24.9]}$, since this interval is a superset of the calculated interval. The range could not, however, be reported as ${\displaystyle [24.6,24.8]}$, since that interval excludes possible BMI values.

Interval arithmetic states the range of possible outcomes explicitly. Results are no longer stated as numbers, but as intervals that represent imprecise values. The size of an interval, similar to an error bar, expresses the extent of the uncertainty.

#### Multiple intervals

Height and body weight both affect the value of the BMI. We have already treated weight as an uncertain measurement, but height is also subject to uncertainty. Height measurements in metres are usually rounded to the nearest centimeter: a recorded measurement of 1.79 metres actually means a height in the interval ${\displaystyle [1.785,1.795]}$. Now, all four combinations of possible height/weight values must be considered. Using the interval methods described below, the BMI lies in the interval

${\displaystyle {\frac {[79.5,80.5]}{[1.785,1.795]^{2}}}\subseteq [24.673,25.266].}$

In this case, the man may have a normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion. This demonstrates interval arithmetic's ability to correctly track and propagate error.
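The interval quotient above can be reproduced in a few lines of Python. This is a minimal sketch using endpoint formulas for positive intervals; the helper names are illustrative, not from any particular library.

```python
def idiv(x, y):
    # [x]/[y], assuming 0 is not contained in y: min/max of endpoint quotients
    q = [x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1]]
    return (min(q), max(q))

def isqr_pos(x):
    # square of an interval of nonnegative numbers
    return (x[0] * x[0], x[1] * x[1])

weight = (79.5, 80.5)    # kg: scale reads 80 kg, resolution 1 kg
height = (1.785, 1.795)  # m: tape reads 1.79 m, resolution 1 cm

bmi = idiv(weight, isqr_pos(height))
print(bmi)  # roughly (24.674, 25.265): straddles the cutoff of 25
```

Since the resulting interval straddles 25, no definitive classification is possible, exactly as argued above.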

## Interval operators

A binary operation ${\displaystyle \star }$ on two intervals, such as addition or multiplication, is defined by

${\displaystyle [x_{1},x_{2}]{\,\star \,}[y_{1},y_{2}]=\{x\star y\,|\,x\in [x_{1},x_{2}]\,\land \,y\in [y_{1},y_{2}]\}.}$

In other words, it is the set of all possible values of ${\displaystyle x\star y}$, where ${\displaystyle x}$ and ${\displaystyle y}$ are in their corresponding intervals. If ${\displaystyle \star }$ is monotone in each operand on the intervals, which is the case for the four basic arithmetic operations (except division when the denominator contains ${\displaystyle 0}$), the extreme values occur at the endpoints of the operand intervals. Writing out all combinations, one way of stating this is

${\displaystyle [x_{1},x_{2}]\star [y_{1},y_{2}]=\left[\min\{x_{1}\star y_{1},x_{1}\star y_{2},x_{2}\star y_{1},x_{2}\star y_{2}\},\max\{x_{1}\star y_{1},x_{1}\star y_{2},x_{2}\star y_{1},x_{2}\star y_{2}\}\right],}$

provided that ${\displaystyle x\star y}$ is defined for all ${\displaystyle x\in [x_{1},x_{2}]}$ and ${\displaystyle y\in [y_{1},y_{2}]}$.

For practical applications this can be simplified further:

• Addition: ${\displaystyle [x_{1},x_{2}]+[y_{1},y_{2}]=[x_{1}+y_{1},x_{2}+y_{2}]}$
• Subtraction: ${\displaystyle [x_{1},x_{2}]-[y_{1},y_{2}]=[x_{1}-y_{2},x_{2}-y_{1}]}$
• Multiplication: ${\displaystyle [x_{1},x_{2}]\cdot [y_{1},y_{2}]=[\min\{x_{1}y_{1},x_{1}y_{2},x_{2}y_{1},x_{2}y_{2}\},\max\{x_{1}y_{1},x_{1}y_{2},x_{2}y_{1},x_{2}y_{2}\}]}$
• Division:
${\displaystyle {\frac {[x_{1},x_{2}]}{[y_{1},y_{2}]}}=[x_{1},x_{2}]\cdot {\frac {1}{[y_{1},y_{2}]}},}$
where
${\displaystyle {\begin{aligned}{\frac {1}{[y_{1},y_{2}]}}&=\left[{\tfrac {1}{y_{2}}},{\tfrac {1}{y_{1}}}\right]&&\mathrm {if} \;0\notin [y_{1},y_{2}]\\{\frac {1}{[y_{1},0]}}&=\left[-\infty ,{\tfrac {1}{y_{1}}}\right]&&\mathrm {if} \;y_{1}<0\\{\frac {1}{[0,y_{2}]}}&=\left[{\tfrac {1}{y_{2}}},\infty \right]&&\mathrm {if} \;y_{2}>0\\{\frac {1}{[y_{1},y_{2}]}}&=\left[-\infty ,{\tfrac {1}{y_{1}}}\right]\cup \left[{\tfrac {1}{y_{2}}},\infty \right]\subseteq [-\infty ,\infty ]&&\mathrm {if} \;0\in (y_{1},y_{2})\end{aligned}}}$

The last case loses useful information about the exclusion of ${\displaystyle (1/y_{1},1/y_{2})}$. Thus, it is common to work with ${\displaystyle \left[-\infty ,{\tfrac {1}{y_{1}}}\right]}$ and ${\displaystyle \left[{\tfrac {1}{y_{2}}},\infty \right]}$ as separate intervals. More generally, when working with discontinuous functions, it is sometimes useful to do the calculation with so-called multi-intervals of the form ${\textstyle \bigcup _{i}\left[a_{i},b_{i}\right].}$ The corresponding multi-interval arithmetic maintains a set of (usually disjoint) intervals and also provides for overlapping intervals to unite. [2]

Interval multiplication often only requires two multiplications. If ${\displaystyle x_{1}}$, ${\displaystyle y_{1}}$ are nonnegative,

${\displaystyle [x_{1},x_{2}]\cdot [y_{1},y_{2}]=[x_{1}\cdot y_{1},x_{2}\cdot y_{2}],\qquad {\text{ if }}x_{1},y_{1}\geq 0.}$

The multiplication can be interpreted as the area of a rectangle with varying edges. The result interval covers all possible areas, from the smallest to the largest.

With the help of these definitions, it is already possible to calculate the range of simple functions, such as ${\displaystyle f(a,b,x)=a\cdot x+b.}$ For example, if ${\displaystyle a=[1,2]}$, ${\displaystyle b=[5,7]}$ and ${\displaystyle x=[2,3]}$:

${\displaystyle f(a,b,x)=([1,2]\cdot [2,3])+[5,7]=[1\cdot 2,2\cdot 3]+[5,7]=[7,13]}$.
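The endpoint formulas above can be sketched in a few lines of Python; the helper names `iadd` and `imul` are illustrative.

```python
def iadd(x, y):
    # [x1,x2] + [y1,y2] = [x1+y1, x2+y2]
    return (x[0] + y[0], x[1] + y[1])

def imul(x, y):
    # min/max over the four endpoint products
    p = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(p), max(p))

a, b, x = (1, 2), (5, 7), (2, 3)
result = iadd(imul(a, x), b)  # f(a, b, x) = a*x + b
print(result)  # (7, 13)
```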

### Notation

To shorten the notation of intervals in formulae, brackets can be used.

${\displaystyle [x]\equiv [x_{1},x_{2}]}$ can be used to denote an interval. In such compact notation, the symbol ${\displaystyle [x]}$ for a general interval should not be confused with the single-point interval ${\displaystyle [x_{1},x_{1}]}$. For the set of all intervals, we can use

${\displaystyle [\mathbb {R} ]:=\left\{\,[x_{1},x_{2}]\,|\,x_{1}\leq x_{2}{\text{ and }}x_{1},x_{2}\in \mathbb {R} \cup \{-\infty ,\infty \}\right\}}$

as an abbreviation. For a vector of intervals ${\displaystyle \left([x]_{1},\ldots ,[x]_{n}\right)\in [\mathbb {R} ]^{n}}$ we can use a bold font: ${\displaystyle [\mathbf {x} ]}$.

### Elementary functions

Interval functions beyond the four basic operators may also be defined.

For monotonic functions in one variable, the range of values is simple to compute. If ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ is monotonically increasing (resp. decreasing) on the interval ${\displaystyle [x_{1},x_{2}],}$ then for all ${\displaystyle y_{1},y_{2}\in [x_{1},x_{2}]}$ such that ${\displaystyle y_{1}\leq y_{2},}$ ${\displaystyle f(y_{1})\leq f(y_{2})}$ (resp. ${\displaystyle f(y_{2})\leq f(y_{1})}$).

The range corresponding to the interval ${\displaystyle [y_{1},y_{2}]\subseteq [x_{1},x_{2}]}$ can be therefore calculated by applying the function to its endpoints:

${\displaystyle f([y_{1},y_{2}])=\left[\min \left\{f(y_{1}),f(y_{2})\right\},\max \left\{f(y_{1}),f(y_{2})\right\}\right].}$

From this, the following basic features for interval functions can easily be defined:

• Exponential function: ${\displaystyle a^{[x_{1},x_{2}]}=[a^{x_{1}},a^{x_{2}}]}$ for ${\displaystyle a>1,}$
• Logarithm: ${\displaystyle \log _{a}[x_{1},x_{2}]=[\log _{a}{x_{1}},\log _{a}{x_{2}}]}$ for positive intervals ${\displaystyle [x_{1},x_{2}]}$ and ${\displaystyle a>1,}$
• Odd powers: ${\displaystyle [x_{1},x_{2}]^{n}=[x_{1}^{n},x_{2}^{n}]}$, for odd ${\displaystyle n\in \mathbb {N} .}$

For even powers, the range of values being considered is important, and needs to be dealt with before doing any multiplication. For example, ${\displaystyle x^{n}}$ for ${\displaystyle x\in [-1,1]}$ should produce the interval ${\displaystyle [0,1]}$ when ${\displaystyle n=2,4,6,\ldots .}$ But if ${\displaystyle [-1,1]^{n}}$ is taken by repeating interval multiplication of form ${\displaystyle [-1,1]\cdot [-1,1]\cdot \cdots \cdot [-1,1]}$ then the result is ${\displaystyle [-1,1],}$ wider than necessary.
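The difference can be demonstrated with a short sketch, assuming a dedicated even-power rule alongside naive interval multiplication (helper names are illustrative):

```python
def imul(x, y):
    # general interval multiplication: min/max over endpoint products
    p = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(p), max(p))

def ipow_even(x, n):
    # x**n for even n, treating x as a single variable
    a, b = x[0] ** n, x[1] ** n
    if x[0] <= 0 <= x[1]:
        return (0, max(a, b))
    return (min(a, b), max(a, b))

x = (-1, 1)
print(imul(x, x))       # (-1, 1): the two factors treated as independent
print(ipow_even(x, 2))  # (0, 1): the true range of x^2 on [-1, 1]
```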

More generally, one can say that for piecewise monotonic functions it is sufficient to consider the endpoints ${\displaystyle x_{1}}$, ${\displaystyle x_{2}}$ of an interval, together with the so-called critical points within the interval, namely those points where the monotonicity of the function changes direction. For the sine and cosine functions, the critical points are at ${\displaystyle \left({\tfrac {1}{2}}+n\right)\pi }$ or ${\displaystyle n\pi }$ for ${\displaystyle n\in \mathbb {Z} }$, respectively. Thus, only up to five points within an interval need to be considered, and the resulting interval is ${\displaystyle [-1,1]}$ whenever the interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values, namely −1, 0, and 1.

### Interval extensions of general functions

In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If ${\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} }$ is a function from a real vector to a real number, then  ${\displaystyle [f]:[\mathbb {R} ]^{n}\rightarrow [\mathbb {R} ]}$ is called an interval extension of ${\displaystyle f}$ if

${\displaystyle [f]([\mathbf {x} ])\supseteq \{f(\mathbf {y} )|\mathbf {y} \in [\mathbf {x} ]\}}$.

This definition of the interval extension does not give a precise result. For example, both ${\displaystyle [f]([x_{1},x_{2}])=[e^{x_{1}},e^{x_{2}}]}$ and ${\displaystyle [g]([x_{1},x_{2}])=[{-\infty },{\infty }]}$ are allowable extensions of the exponential function. Tighter extensions are desirable, though the relative costs of calculation and imprecision should be considered; in this case, ${\displaystyle [f]}$ should be chosen as it gives the tightest possible result.

Given a real expression, its natural interval extension is achieved by using the interval extensions of each of its subexpressions, functions and operators.

For a ${\displaystyle k+1}$ times differentiable function ${\displaystyle f}$, the Taylor interval extension (of degree ${\displaystyle k}$) is defined by

${\displaystyle [f]([\mathbf {x} ]):=f(\mathbf {y} )+\sum _{i=1}^{k}{\frac {1}{i!}}\mathrm {D} ^{i}f(\mathbf {y} )\cdot ([\mathbf {x} ]-\mathbf {y} )^{i}+[r]([\mathbf {x} ],[\mathbf {x} ],\mathbf {y} )}$,

for some ${\displaystyle \mathbf {y} \in [\mathbf {x} ]}$, where ${\displaystyle \mathrm {D} ^{i}f(\mathbf {y} )}$ is the ${\displaystyle i}$th order differential of ${\displaystyle f}$ at the point ${\displaystyle \mathbf {y} }$ and ${\displaystyle [r]}$ is an interval extension of the Taylor remainder

${\displaystyle r(\mathbf {x} ,\xi ,\mathbf {y} )={\frac {1}{(k+1)!}}\mathrm {D} ^{k+1}f(\xi )\cdot (\mathbf {x} -\mathbf {y} )^{k+1}.}$

The vector ${\displaystyle \xi }$ lies between ${\displaystyle \mathbf {x} }$ and ${\displaystyle \mathbf {y} }$ with ${\displaystyle \mathbf {x} ,\mathbf {y} \in [\mathbf {x} ]}$, so ${\displaystyle \xi }$ is also enclosed by ${\displaystyle [\mathbf {x} ]}$. Usually one chooses ${\displaystyle \mathbf {y} }$ to be the midpoint of the interval and uses the natural interval extension to assess the remainder.

The special case of the Taylor interval extension of degree ${\displaystyle k=0}$ is also referred to as the mean value form.

## Complex interval arithmetic

An interval can also be defined as a locus of points at a given distance from the centre, and this definition can be extended from real numbers to complex numbers. [3] As is the case with computing with real numbers, computing with complex numbers involves uncertain data. Given that an interval number is a real closed interval and a complex number is an ordered pair of real numbers, there is no reason to limit the application of interval arithmetic to the measurement of uncertainties in computations with real numbers. [4] Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers. [4]

The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic. [4] It can be shown that, as is the case with real interval arithmetic, there is no distributivity between addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers. [4] Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties of ordinary complex conjugates do not hold for complex interval conjugates. [4]

Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions, at the expense of sacrificing other useful properties of ordinary arithmetic. [4]

## Interval methods

The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.

### Rounded interval arithmetic

To work effectively in a real-life implementation, intervals must be compatible with floating point computing. The earlier operations were based on exact arithmetic, but in general fast numerical solution methods may not be available. The range of values of the function ${\displaystyle f(x,y)=x+y}$ for ${\displaystyle x\in [0.1,0.8]}$ and ${\displaystyle y\in [0.06,0.08]}$ is, for example, ${\displaystyle [0.16,0.88]}$. If the same calculation is done with single-digit precision, the result would normally be ${\displaystyle [0.2,0.9]}$. But ${\displaystyle [0.2,0.9]\not \supseteq [0.16,0.88]}$, so this approach would contradict the basic principles of interval arithmetic, as a part of the domain of ${\displaystyle f([0.1,0.8],[0.06,0.08])}$ would be lost. Instead, the outward rounded solution ${\displaystyle [0.1,0.9]}$ is used.
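On an IEEE 754 platform, one simple (if slightly over-conservative) way to get outward rounding from Python is to nudge each computed endpoint one representable floating-point step outward with `math.nextafter` (Python 3.9+). This sketch assumes the default round-to-nearest arithmetic; the helper name is illustrative.

```python
import math

def iadd_outward(x, y):
    # add intervals, then push each endpoint one representable float
    # outward so the result is guaranteed to enclose the exact sum
    lo = math.nextafter(x[0] + y[0], -math.inf)  # round lower bound down
    hi = math.nextafter(x[1] + y[1], math.inf)   # round upper bound up
    return (lo, hi)

x, y = (0.1, 0.8), (0.06, 0.08)
lo, hi = iadd_outward(x, y)
# the enclosure is strictly wider than the rounded endpoint sums
assert lo < x[0] + y[0] and hi > x[1] + y[1]
```

Changing the processor's rounding mode directly, as described above, gives tighter bounds; the nudge approach trades one extra ulp of width for portability.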

The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e. up), or rounding towards negative infinity (i.e. down).

The required outward rounding for interval arithmetic can thus be achieved by changing the rounding settings of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval ${\displaystyle [\varepsilon _{1},\varepsilon _{2}]}$ can be added.

### Dependency problem

The so-called dependency problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation and each occurrence is taken independently, this can lead to an unwanted expansion of the resulting intervals.

As an illustration, take the function ${\displaystyle f}$ defined by ${\displaystyle f(x)=x^{2}+x.}$ The values of this function over the interval ${\displaystyle [-1,1]}$ are ${\displaystyle \left[-{\tfrac {1}{4}},2\right].}$ Its natural interval extension is calculated as:

${\displaystyle [-1,1]^{2}+[-1,1]=[0,1]+[-1,1]=[-1,2],}$

which is slightly larger; we have instead calculated the infimum and supremum of the function ${\displaystyle h(x,y)=x^{2}+y}$ over ${\displaystyle x,y\in [-1,1].}$ There is a better expression of ${\displaystyle f}$ in which the variable ${\displaystyle x}$ only appears once, obtained by completing the square:

${\displaystyle f(x)=\left(x+{\frac {1}{2}}\right)^{2}-{\frac {1}{4}}.}$

So the suitable interval calculation is

${\displaystyle \left([-1,1]+{\frac {1}{2}}\right)^{2}-{\frac {1}{4}}=\left[-{\frac {1}{2}},{\frac {3}{2}}\right]^{2}-{\frac {1}{4}}=\left[0,{\frac {9}{4}}\right]-{\frac {1}{4}}=\left[-{\frac {1}{4}},2\right]}$

and gives the correct values.
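The two evaluations can be compared directly in a short sketch (helper names illustrative); the naive form treats the two occurrences of x independently, while the rewritten form does not:

```python
def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def isqr(x):
    # x^2 as a single-variable operation (even-power rule)
    a, b = x[0] ** 2, x[1] ** 2
    if x[0] <= 0 <= x[1]:
        return (0.0, max(a, b))
    return (min(a, b), max(a, b))

x = (-1.0, 1.0)
naive = iadd(isqr(x), x)  # x^2 + x with the two occurrences independent
rewritten = iadd(isqr(iadd(x, (0.5, 0.5))), (-0.25, -0.25))  # (x + 1/2)^2 - 1/4
print(naive)      # (-1.0, 2.0): over-wide
print(rewritten)  # (-0.25, 2.0): the exact range
```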

In general, it can be shown that the exact range of values can be achieved, if each variable appears only once and if ${\displaystyle f}$ is continuous inside the box. However, not every function can be rewritten this way.

The over-estimation of the value range caused by the dependency problem can be so severe that the result covers an uselessly large range, preventing more meaningful conclusions.

An additional increase in the range stems from solution sets that do not have the shape of an interval vector. The solution set of the linear system

${\displaystyle {\begin{cases}x=p\\y=p\end{cases}}\qquad p\in [-1,1]}$

is precisely the line between the points ${\displaystyle (-1,-1)}$ and ${\displaystyle (1,1).}$ Using interval methods results in the unit square, ${\displaystyle [-1,1]\times [-1,1].}$ This is known as the wrapping effect .

### Linear interval systems

A linear interval system consists of a matrix interval extension ${\displaystyle [\mathbf {A} ]\in [\mathbb {R} ]^{n\times m}}$ and an interval vector ${\displaystyle [\mathbf {b} ]\in [\mathbb {R} ]^{n}}$. We seek the smallest cuboid ${\displaystyle [\mathbf {x} ]\in [\mathbb {R} ]^{m}}$ containing all vectors ${\displaystyle \mathbf {x} \in \mathbb {R} ^{m}}$ for which there is a pair ${\displaystyle (\mathbf {A} ,\mathbf {b} )}$ with ${\displaystyle \mathbf {A} \in [\mathbf {A} ]}$ and ${\displaystyle \mathbf {b} \in [\mathbf {b} ]}$ satisfying

${\displaystyle \mathbf {A} \cdot \mathbf {x} =\mathbf {b} }$.

For square systems (in other words, for ${\displaystyle n=m}$), such an interval vector ${\displaystyle [\mathbf {x} ]}$, which covers all possible solutions, can be found simply with the interval Gauss method. This replaces the numerical operations of the linear algebra method known as Gaussian elimination with their interval versions. However, since this method uses the interval entities ${\displaystyle [\mathbf {A} ]}$ and ${\displaystyle [\mathbf {b} ]}$ repeatedly in the calculation, it can produce poor results for some problems. Hence the result of the interval-valued Gauss method only provides a first rough estimate: although it contains the entire solution set, it also has a large area outside it.

A rough solution ${\displaystyle [\mathbf {x} ]}$ can often be improved by an interval version of the Gauss–Seidel method. The motivation for this is that the ${\displaystyle i}$-th row of the interval extension of the linear equation

${\displaystyle {\begin{pmatrix}{[a_{11}]}&\cdots &{[a_{1n}]}\\\vdots &\ddots &\vdots \\{[a_{n1}]}&\cdots &{[a_{nn}]}\end{pmatrix}}\cdot {\begin{pmatrix}{x_{1}}\\\vdots \\{x_{n}}\end{pmatrix}}={\begin{pmatrix}{[b_{1}]}\\\vdots \\{[b_{n}]}\end{pmatrix}}}$

can be solved for the variable ${\displaystyle x_{i}}$ if the division ${\displaystyle 1/[a_{ii}]}$ is allowed. It is therefore simultaneously

${\displaystyle x_{i}\in [x_{i}]}$ and ${\displaystyle x_{i}\in {\frac {[b_{i}]-\sum \limits _{k\not =i}[a_{ik}]\cdot [x_{k}]}{[a_{ii}]}}}$.

So we can now replace ${\displaystyle [x_{i}]}$ by

${\displaystyle [x_{i}]\cap {\frac {[b_{i}]-\sum \limits _{k\not =i}[a_{ik}]\cdot [x_{k}]}{[a_{ii}]}}}$,

and so improve the vector ${\displaystyle [\mathbf {x} ]}$ element by element. Since the procedure is more efficient for a diagonally dominant matrix, instead of the system ${\displaystyle [\mathbf {A} ]\cdot \mathbf {x} =[\mathbf {b} ]{\mbox{,}}}$ one can often try multiplying it by an appropriate rational matrix ${\displaystyle \mathbf {M} }$ with the resulting matrix equation

${\displaystyle (\mathbf {M} \cdot [\mathbf {A} ])\cdot \mathbf {x} =\mathbf {M} \cdot [\mathbf {b} ]}$

left to solve. If one chooses, for example, ${\displaystyle \mathbf {M} =\mathbf {A} ^{-1}}$ for the central matrix ${\displaystyle \mathbf {A} \in [\mathbf {A} ]}$, then ${\displaystyle \mathbf {M} \cdot [\mathbf {A} ]}$ is an outer extension of the identity matrix.
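A single sweep of the interval Gauss–Seidel update can be sketched as follows, for an illustrative diagonally dominant 2×2 system and a rough starting box (the matrix, right-hand side, and starting box are made up for the example):

```python
def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    p = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(p), max(p))

def idiv(x, y):
    # division requires 0 not in the denominator interval
    assert not (y[0] <= 0 <= y[1])
    q = [x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1]]
    return (min(q), max(q))

def intersect(x, y):
    return (max(x[0], y[0]), min(x[1], y[1]))

A = [[(4, 5), (-1, 1)],
     [(0, 1), (4, 5)]]
b = [(1, 2), (2, 3)]
x = [(-10.0, 10.0), (-10.0, 10.0)]  # rough initial enclosure

for i in range(2):  # one Gauss-Seidel sweep: update each component in turn
    acc = b[i]
    for k in range(2):
        if k != i:
            acc = isub(acc, imul(A[i][k], x[k]))
    x[i] = intersect(x[i], idiv(acc, A[i][i]))

print(x)  # a much tighter box: [(-2.25, 3.0), (-0.25, 1.3125)]
```

Repeating the sweep shrinks the box further until it stabilises around the solution set.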

These methods only work well if the widths of the intervals occurring are sufficiently small. For wider intervals it can be useful to reduce an interval-linear system to a finite (albeit large) number of equivalent real linear systems. If all the matrices ${\displaystyle \mathbf {A} \in [\mathbf {A} ]}$ are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors.

This is only suitable for systems of smaller dimension, since with a dense ${\displaystyle n\times n}$ matrix, ${\displaystyle 2^{n^{2}}}$ real matrices need to be inverted, with ${\displaystyle 2^{n}}$ vectors for the right hand side. This approach was developed by Jiri Rohn and is still being developed. [5]

### Interval Newton method

An interval variant of Newton's method for finding the zeros in an interval vector ${\displaystyle [\mathbf {x} ]}$ can be derived from the mean value extension. [6] For an unknown vector ${\displaystyle \mathbf {z} \in [\mathbf {x} ]}$ and a point ${\displaystyle \mathbf {y} \in [\mathbf {x} ]}$, it gives

${\displaystyle f(\mathbf {z} )\in f(\mathbf {y} )+[J_{f}](\mathbf {[x]} )\cdot (\mathbf {z} -\mathbf {y} )}$.

A zero ${\displaystyle \mathbf {z} }$, that is ${\displaystyle f(\mathbf {z} )=0}$, must thus satisfy

${\displaystyle f(\mathbf {y} )+[J_{f}](\mathbf {[x]} )\cdot (\mathbf {z} -\mathbf {y} )=0}$.

This is equivalent to ${\displaystyle \mathbf {z} \in \mathbf {y} -[J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )}$. An outer estimate of ${\displaystyle [J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )}$ can be determined using linear methods.

In each step of the interval Newton method, an approximate starting value ${\displaystyle [\mathbf {x} ]\in [\mathbb {R} ]^{n}}$ is replaced by ${\displaystyle [\mathbf {x} ]\cap \left(\mathbf {y} -[J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )\right)}$ and so the result can be improved iteratively. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result produces all zeros in the initial range. Conversely, it proves that no zeros of ${\displaystyle f}$ were in the initial range ${\displaystyle [\mathbf {x} ]}$ if a Newton step produces the empty set.

The method converges on all zeros in the starting region. Division by zero can lead to separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method.

As an example, consider the function ${\displaystyle f(x)=x^{2}-2}$, the starting range ${\displaystyle [x]=[-2,2]}$, and the point ${\displaystyle y=0}$. We then have ${\displaystyle J_{f}(x)=2\,x}$ and the first Newton step gives

${\displaystyle [-2,2]\cap \left(0-{\frac {1}{2\cdot [-2,2]}}(0-2)\right)=[-2,2]\cap {\Big (}[{-\infty },{-0.5}]\cup [{0.5},{\infty }]{\Big )}=[{-2},{-0.5}]\cup [{0.5},{2}]}$.

More Newton steps are used separately on ${\displaystyle x\in [{-2},{-0.5}]}$ and ${\displaystyle [{0.5},{2}]}$. These converge to arbitrarily small intervals around ${\displaystyle -{\sqrt {2}}}$ and ${\displaystyle +{\sqrt {2}}}$.
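The first Newton step above can be reproduced in a short sketch using extended interval division; the helper names are illustrative.

```python
from math import inf

def extended_reciprocal(y):
    """1/[y1,y2] as a list of intervals, splitting when 0 is interior."""
    y1, y2 = y
    if y1 > 0 or y2 < 0:
        return [(1 / y2, 1 / y1)]
    if y1 < 0 < y2:
        return [(-inf, 1 / y1), (1 / y2, inf)]
    if y1 == 0:
        return [(1 / y2, inf)]
    return [(-inf, 1 / y1)]

def intersect(x, y):
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    return (lo, hi) if lo <= hi else None

x = (-2.0, 2.0)        # starting range
y = 0.0                # expansion point
fy = y * y - 2.0       # f(y) for f(x) = x^2 - 2
J = (2 * x[0], 2 * x[1])  # interval slope J_f([x]) = 2*[x]

m = -fy  # the step is y - r*f(y) = y + m*r for each reciprocal piece r
pieces = []
for r in extended_reciprocal(J):
    cand = (y + min(m * r[0], m * r[1]), y + max(m * r[0], m * r[1]))
    hit = intersect(x, cand)
    if hit is not None:
        pieces.append(hit)

print(pieces)  # [(-2.0, -0.5), (0.5, 2.0)]
```

Iterating separately on each piece reproduces the convergence toward the two square roots described above.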

The Interval Newton method can also be used with thick functions such as ${\displaystyle g(x)=x^{2}-[2,3]}$, which would in any case have interval results. The result then produces intervals containing ${\displaystyle \left[-{\sqrt {3}},-{\sqrt {2}}\right]\cup \left[{\sqrt {2}},{\sqrt {3}}\right]}$.

### Bisection and covers

The various interval methods deliver conservative results, as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals.

Covering an interval vector ${\displaystyle [\mathbf {x} ]}$ by smaller boxes ${\displaystyle [\mathbf {x} _{1}],\ldots ,[\mathbf {x} _{k}],}$ so that

${\displaystyle [\mathbf {x} ]=\bigcup _{i=1}^{k}[\mathbf {x} _{i}],}$

the following then holds for the range of values:

${\displaystyle f([\mathbf {x} ])=\bigcup _{i=1}^{k}f([\mathbf {x} _{i}]).}$

So for the interval extensions described above the following holds:

${\displaystyle [f]([\mathbf {x} ])\supseteq \bigcup _{i=1}^{k}[f]([\mathbf {x} _{i}]).}$

Since ${\displaystyle [f]([\mathbf {x} ])}$ is often a proper superset of the right-hand side, this usually leads to an improved estimate.

Such a cover can be generated by bisection: a thick element ${\displaystyle [x_{i1},x_{i2}]}$ of the interval vector ${\displaystyle [\mathbf {x} ]=([x_{11},x_{12}],\ldots ,[x_{n1},x_{n2}])}$ is split at its centre into the two intervals ${\displaystyle \left[x_{i1},{\tfrac {1}{2}}(x_{i1}+x_{i2})\right]}$ and ${\displaystyle \left[{\tfrac {1}{2}}(x_{i1}+x_{i2}),x_{i2}\right].}$ If the result is still not suitable, further gradual subdivision is possible. Note that a cover of ${\displaystyle 2^{r}}$ intervals results from ${\displaystyle r}$ divisions of vector elements, substantially increasing the computation costs.

With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known as mincing. This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.
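Mincing can be sketched for the function f(x) = x² + x from the dependency example: splitting [-1, 1] into equal-width subintervals and taking the union of the natural extensions tightens the enclosure (helper names illustrative).

```python
def natural_extension(x):
    # natural extension of f(x) = x^2 + x: x^2 as a power, plus x,
    # with the two occurrences of x taken independently
    lo2 = 0.0 if x[0] <= 0 <= x[1] else min(x[0] ** 2, x[1] ** 2)
    hi2 = max(x[0] ** 2, x[1] ** 2)
    return (lo2 + x[0], hi2 + x[1])

def minced_extension(x, pieces):
    # cover x by `pieces` equal-width subintervals and union the results
    w = (x[1] - x[0]) / pieces
    subs = [(x[0] + i * w, x[0] + (i + 1) * w) for i in range(pieces)]
    results = [natural_extension(s) for s in subs]
    return (min(r[0] for r in results), max(r[1] for r in results))

x = (-1.0, 1.0)
print(natural_extension(x))   # (-1.0, 2.0): the whole box at once
print(minced_extension(x, 8)) # (-0.5, 2.0): tighter; the true range is [-0.25, 2]
```

More pieces tighten the bound further, at proportionally higher cost, illustrating why both bisection and mincing only suit problems of low dimension.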

## Application

Interval arithmetic can be used in various areas (such as set inversion, motion planning, set estimation or stability analysis) to treat estimates with no exact numerical value. [7]

### Rounding error analysis

Interval arithmetic is used with error analysis, to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The distance between the interval boundaries gives the current extent of the rounding error directly:

Error = ${\displaystyle b-a}$ for a given interval ${\displaystyle [a,b]}$.

Interval analysis complements rather than replaces traditional methods for error reduction, such as pivoting.

### Tolerance analysis

Parameters for which no exact values can be assigned often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely. [2]

If the behavior of such a system affected by tolerances satisfies, for example, ${\displaystyle f(\mathbf {x} ,\mathbf {p} )=0}$ for ${\displaystyle \mathbf {p} \in [\mathbf {p} ]}$ and unknown ${\displaystyle \mathbf {x} }$, then the set of possible solutions

${\displaystyle \{\mathbf {x} \,|\,\exists \mathbf {p} \in [\mathbf {p} ],f(\mathbf {x} ,\mathbf {p} )=0\}}$,

can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation, interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, since probability distributions of the parameters are not taken into account.

### Fuzzy interval arithmetic

Interval arithmetic can also be used with membership functions for fuzzy quantities as they are used in fuzzy logic. Apart from the strict statements ${\displaystyle x\in [x]}$ and ${\displaystyle x\not \in [x]}$, intermediate values are also possible, to which real numbers ${\displaystyle \mu \in [0,1]}$ are assigned. ${\displaystyle \mu =1}$ corresponds to definite membership while ${\displaystyle \mu =0}$ is non-membership. A distribution function assigns uncertainty, which can be understood as a further interval.

For fuzzy arithmetic [8] only a finite number of discrete membership stages ${\displaystyle \mu _{i}\in [0,1]}$ are considered. The form of such a distribution for an indistinct value can then be represented by a sequence of intervals

${\displaystyle \left[x^{(1)}\right]\supset \left[x^{(2)}\right]\supset \cdots \supset \left[x^{(k)}\right].}$

The interval ${\displaystyle \left[x^{(i)}\right]}$ corresponds exactly to the fluctuation range for the stage ${\displaystyle \mu _{i}.}$

The appropriate distribution for a function ${\displaystyle f(x_{1},\ldots ,x_{n})}$ concerning indistinct values ${\displaystyle x_{1},\ldots ,x_{n}}$ and the corresponding sequences

${\displaystyle \left[x_{1}^{(1)}\right]\supset \cdots \supset \left[x_{1}^{(k)}\right],\ldots ,\left[x_{n}^{(1)}\right]\supset \cdots \supset \left[x_{n}^{(k)}\right]}$

can be approximated by the sequence

${\displaystyle \left[y^{(1)}\right]\supset \cdots \supset \left[y^{(k)}\right],}$

where

${\displaystyle \left[y^{(i)}\right]=f\left(\left[x_{1}^{(i)}\right],\ldots ,\left[x_{n}^{(i)}\right]\right)}$

and can be calculated by interval methods. The value ${\displaystyle \left[y^{(1)}\right]}$ corresponds to the result of an interval calculation.
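The grade-by-grade evaluation above can be sketched in a few lines of Python (a hypothetical illustration; the function names and example values are not from the article):

```python
# Sketch of fuzzy interval arithmetic over a finite set of membership grades.
# A fuzzy value is modelled as a nested sequence of intervals
# [x^(1)] ⊃ [x^(2)] ⊃ ... ⊃ [x^(k)], tightest interval (grade mu = 1) first.

def interval_mul(a, b):
    """Enclosure of the product of two intervals given as (lo, hi) pairs."""
    candidates = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(candidates), max(candidates))

# Nested intervals for an indistinct value x (e.g. a measured length near 2):
x_levels = [(1.9, 2.1), (1.8, 2.2), (1.5, 2.5)]

# Evaluate f(x) = x * x grade by grade: [y^(i)] = f([x^(i)]).
y_levels = [interval_mul(xi, xi) for xi in x_levels]
# The results are again nested and form the fuzzy distribution of f.
```

Note that `interval_mul(xi, xi)` treats its two factors as independent, so for an interval containing zero it would overestimate the true range of ${\displaystyle x^{2}}$ (the dependency problem); for the strictly positive intervals used here the bounds are exact.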

### Computer-assisted proof

Warwick Tucker used interval arithmetic to solve the 14th of Smale's problems, that is, to show that the Lorenz attractor is a strange attractor. [9] Thomas Hales used interval arithmetic to prove the Kepler conjecture.

## History

Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example, Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten.

Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young. [10] Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer; [11] intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga in 1958. [12]

The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966. [13] [14] He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic. [15] Its merit was that starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding.

Independently in 1956, Mieczyslaw Warmus suggested formulae for calculations with intervals, [16] though Moore found the first non-trivial applications.

In the following twenty years, German research groups around Ulrich W. Kulisch [1] [17] and Götz Alefeld [18] carried out pioneering work at the University of Karlsruhe and later also at the Bergische University of Wuppertal. For example, Karl Nickel explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier, among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimisation, including what is now known as Hansen's method, perhaps the most widely used interval algorithm. [6] Classical methods for such problems seek the largest (or smallest) global value but can only find a local optimum, with no guarantee that better values do not exist; Helmut Ratschek and Jon George Rokne extended branch and bound methods, which until then had applied only to integer values, to continuous values by using intervals.

In 1988, Rudolf Lohner developed Fortran-based software for reliable solutions for initial value problems using ordinary differential equations. [19]

The journal Reliable Computing (originally Interval Computations) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimisation, has contributed significantly to the unification of notation and terminology used in interval arithmetic. [20]

In recent years work has concentrated in particular on the estimation of preimages of parameterised functions and to robust control theory by the COPRIN working group of INRIA in Sophia Antipolis in France. [21]

## Implementations

There are many software packages that permit the development of numerical applications using interval arithmetic. [22] These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.

Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran and Pascal. [23] The first platform was a Zuse Z23, for which a new interval data type with appropriate elementary operators was made available. There followed in 1976 Pascal-SC, a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C++ class library C-XSC supported many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal to correspond to the improved C++ standard.

Another C++ class library was created in 1993 at the Hamburg University of Technology called Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), which made the usual interval operations more user friendly. It emphasized the efficient use of hardware, portability and independence from a particular representation of intervals.

The Boost collection of C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language. [24]

The Frink programming language has an implementation of interval arithmetic that handles arbitrary-precision numbers. Programs written in Frink can use intervals without rewriting or recompilation.

Gaol [25] is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming.

The Moore library [26] is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on the "concepts" feature of C++.

The Julia programming language [27] has an implementation of interval arithmetic along with high-level features, such as root-finding (for both real and complex-valued functions) and interval constraint programming, via the ValidatedNumerics.jl package. [28]

In addition, computer algebra systems such as FriCAS, Mathematica, Maple, Maxima [29] and MuPAD can handle intervals. The Matlab extension Intlab [30] builds on BLAS routines, and the toolbox b4m provides a Profil/BIAS interface. [30] [31] Moreover, the Euler Math Toolbox includes interval arithmetic.

A library for the functional language OCaml was written in assembly language and C. [32]

## IEEE 1788 standard

A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015. [33] Two reference implementations are freely available. [34] These have been developed by members of the standard's working group: the libieeep1788 [35] library for C++, and the interval package [36] for GNU Octave.

A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed production of implementations. [37]

## Conferences and workshops

Several international conferences and workshops take place every year around the world. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there are also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics) and REC (International Workshop on Reliable Engineering Computing).

## References

1. Kulisch, Ulrich W. (1989). Wissenschaftliches Rechnen mit Ergebnisverifikation. Eine Einführung (in German). Wiesbaden: Vieweg-Verlag. ISBN   3-528-08943-1.
2. Dreyer, Alexander (2003). Interval Analysis of Analog Circuits with Component Tolerances. Aachen, Germany: Shaker Verlag. p. 15. ISBN   3-8322-4555-3.
3. Complex interval arithmetic and its applications, Miodrag S. Petković, Ljiljana D. Petković, Wiley-VCH, 1998, ISBN   978-3-527-40134-5
4. Hend Dawood (2011). Theories of Interval Arithmetic: Mathematical Foundations and Applications. Saarbrücken: LAP LAMBERT Academic Publishing. ISBN   978-3-8465-0154-2.
5. "Jiri Rohn, List of publications". Archived from the original on 2008-11-23. Retrieved 2008-05-26.
6. Walster, G. William; Hansen, Eldon Robert (2004). Global Optimization using Interval Analysis (2nd ed.). New York, USA: Marcel Dekker. ISBN   0-8247-4059-9.
7. Jaulin, Luc; Kieffer, Michel; Didrit, Olivier; Walter, Eric (2001). Applied Interval Analysis. Berlin: Springer. ISBN   1-85233-219-0.
8. Tucker, Warwick (1999). The Lorenz attractor exists. Comptes Rendus de l'Académie des Sciences-Series I-Mathematics, 328(12), 1197-1202.
9. Young, Rosalind Cicely (1931). The algebra of many-valued quantities. Mathematische Annalen, 104(1), 260-290. (NB. A doctoral candidate at the University of Cambridge.)
10. Dwyer, Paul Sumner (1951). Linear computations. Oxford, England: Wiley. (University of Michigan)
11. Sunaga, Teruo (1958). "Theory of interval algebra and its application to numerical analysis". RAAG Memoirs (2): 29–46.
12. Moore, Ramon Edgar (1966). Interval Analysis. Englewood Cliff, New Jersey, USA: Prentice-Hall. ISBN   0-13-476853-1.
13. Cloud, Michael J.; Moore, Ramon Edgar; Kearfott, R. Baker (2009). Introduction to Interval Analysis. Philadelphia: Society for Industrial and Applied Mathematics (SIAM). ISBN   978-0-89871-669-6.
14. Hansen, Eldon Robert (2001-08-13). "Publications Related to Early Interval Work of R. E. Moore". University of Louisiana at Lafayette Press. Retrieved 2015-06-29.
15. Kulisch, Ulrich W. (1969). "Grundzüge der Intervallrechnung". In Laugwitz, Detlef (ed.). Jahrbuch Überblicke Mathematik (in German). 2. Mannheim, Germany: Bibliographisches Institut. pp. 51–98.
16. Alefeld, Götz; Herzberger, Jürgen. Einführung in die Intervallrechnung. Reihe Informatik (in German). 12. Mannheim, Wien, Zürich: B.I.-Wissenschaftsverlag. ISBN   3-411-01466-0.
17. Bounds for ordinary differential equations by Rudolf Lohner. Archived 11 May 2018 at the Wayback Machine (in German)
18. Introductory film (mpeg) of the COPRIN team of INRIA, Sophia Antipolis
19. History of XSC-Languages Archived 2007-09-29 at the Wayback Machine
21. Alliot, Jean-Marc; Gotteland, Jean-Baptiste; Vanaret, Charlie; Durand, Nicolas; Gianazza, David (2012). Implementing an interval computation library for OCaml on x86/amd64 architectures. 17th ACM SIGPLAN International Conference on Functional Programming.
22. Nathalie Revol (2015). The (near-)future IEEE 1788 standard for interval arithmetic, slides // SWIM 2015: 8th Small Workshop in Interval Methods. Prague, 9-11 June 2015
23. "IEEE Std 1788.1-2017 - IEEE Standard for Interval Arithmetic (Simplified)". IEEE Standard. IEEE Standards Association. 2017. Retrieved 2018-02-06.