Maximum and minimum

Local and global maxima and minima for cos(3πx)/x, 0.1 ≤ x ≤ 1.1

In mathematical analysis, the maximum and minimum [note 1] of a function are, respectively, the largest and smallest value taken by the function. Known generically as extremum, [note 2] they may be defined either within a given range (the local or relative extrema) or on the entire domain (the global or absolute extrema) of a function. [1] [2] [3] Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions.


As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum.

In statistics, the corresponding concept is the sample maximum and minimum.

Definition

A real-valued function f defined on a domain X has a global (or absolute) maximum point at x∗ if f(x∗) ≥ f(x) for all x in X. Similarly, the function has a global (or absolute) minimum point at x∗ if f(x∗) ≤ f(x) for all x in X. The value of the function at a maximum point is called the maximum value of the function, denoted max f(x), and the value of the function at a minimum point is called the minimum value of the function. Symbolically, this can be written as follows:

x∗ ∈ X is a global maximum point of the function f : X → ℝ if f(x∗) ≥ f(x) for all x ∈ X.

The definition of global minimum point also proceeds similarly.

If the domain X is a metric space, then f is said to have a local (or relative) maximum point at the point x∗ if there exists some ε > 0 such that f(x∗) ≥ f(x) for all x in X within distance ε of x∗. Similarly, the function has a local minimum point at x∗ if f(x∗) ≤ f(x) for all x in X within distance ε of x∗. A similar definition can be used when X is a topological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows:

Let (X, d) be a metric space and f : X → ℝ a function. Then x∗ ∈ X is a local maximum point of f if there exists some ε > 0 such that f(x∗) ≥ f(x) for all x in X with d(x, x∗) < ε.

The definition of local minimum point can also proceed similarly.

In both the global and local cases, the concept of a strict extremum can be defined. For example, x∗ is a strict global maximum point if, for all x in X with x ≠ x∗, we have f(x∗) > f(x), and x∗ is a strict local maximum point if there exists some ε > 0 such that, for all x in X within distance ε of x∗ with x ≠ x∗, we have f(x∗) > f(x). Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points.

A continuous real-valued function with a compact domain always has a maximum point and a minimum point. An important example is a function whose domain is a closed and bounded interval of real numbers (see the graph above).

Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem, global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the largest (or smallest) one.
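As a concrete illustration of this interior-plus-boundary search, the following minimal Python sketch approximates the global extrema of cos(3πx)/x on the closed interval [0.1, 1.1] (the function in the figure at the top of the page). It detects interior candidates as grid points where the discrete slope changes sign, adds the two boundary points, and keeps the largest and smallest values; the grid resolution and helper names are arbitrary illustrative choices.

    import math

    def f(x):
        # cos(3*pi*x)/x, the function shown in the figure at the top of the page
        return math.cos(3 * math.pi * x) / x

    a, b, n = 0.1, 1.1, 10_000
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]

    # Interior candidates: grid points where the discrete slope changes sign
    # (approximate local maxima and minima in the interior).
    candidates = [xs[i] for i in range(1, n)
                  if (ys[i] - ys[i - 1]) * (ys[i + 1] - ys[i]) <= 0]
    candidates += [a, b]                                # ...plus the boundary points

    x_max = max(candidates, key=f)
    x_min = min(candidates, key=f)
    print(f"global maximum ≈ {f(x_max):.3f} at x ≈ {x_max:.2f}")   # at the boundary x = 0.1
    print(f"global minimum ≈ {f(x_min):.3f} at x ≈ {x_min:.2f}")   # near x = 0.3

The printed results agree with the corresponding row in the table of examples below.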

For differentiable functions, Fermat's theorem states that local extrema in the interior of a domain must occur at critical points (or points where the derivative equals zero). [4] However, not all critical points are extrema. One can often distinguish whether a critical point is a local maximum, a local minimum, or neither by using the first derivative test, second derivative test, or higher-order derivative test, given sufficient differentiability. [5]
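This procedure can also be carried out symbolically with a computer algebra system. The hedged sketch below assumes SymPy is available and applies the second derivative test to f(x) = x^3/3 − x, matching the corresponding row in the table of examples below.

    import sympy as sp

    x = sp.symbols('x', real=True)
    f = x**3 / 3 - x

    critical_points = sp.solve(sp.diff(f, x), x)   # f'(x) = x^2 - 1 = 0  ->  [-1, 1]
    f2 = sp.diff(f, x, 2)                          # f''(x) = 2x

    for c in critical_points:
        curvature = float(f2.subs(x, c))           # f''(c)
        if curvature < 0:
            print(f"x = {c}: local maximum")       # x = -1
        elif curvature > 0:
            print(f"x = {c}: local minimum")       # x = 1
        else:
            print(f"x = {c}: second derivative test inconclusive")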

For any function that is defined piecewise, one finds a maximum (or minimum) by finding the maximum (or minimum) of each piece separately, and then seeing which one is largest (or smallest).
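A minimal sketch of this piece-by-piece procedure, using a made-up continuous two-piece function and a simple grid approximation (both are arbitrary illustrative choices):

    # Hypothetical piecewise function, chosen only to illustrate the procedure:
    #   f(x) = 4 - (x + 1)**2   for x in [-3, 0]
    #   f(x) = 3 - x            for x in [0, 3]
    pieces = [
        (lambda x: 4 - (x + 1) ** 2, -3.0, 0.0),
        (lambda x: 3 - x,             0.0, 3.0),
    ]

    n = 10_000
    candidates = []
    for g, lo, hi in pieces:
        xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
        x_best = max(xs, key=g)                    # maximum of this piece (grid approximation)
        candidates.append((g(x_best), x_best))

    value, point = max(candidates)                 # the largest piece-maximum is the overall maximum
    print(f"maximum ≈ {value:.3f} at x ≈ {point:.3f}")   # ≈ 4.000 at x ≈ -1.000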

Examples

The global maximum of x^(1/x) occurs at x = e.
Function | Maxima and minima
x^2 | Unique global minimum at x = 0.
x^3 | No global minima or maxima. Although the first derivative (3x^2) is 0 at x = 0, this is an inflection point (the second derivative is also 0 at that point).
x^(1/x) | Unique global maximum at x = e. (See figure above.)
x^x | Unique global minimum over the positive real numbers at x = 1/e.
x^3/3 − x | First derivative x^2 − 1 and second derivative 2x. Setting the first derivative to 0 and solving for x gives stationary points at −1 and +1. From the sign of the second derivative, −1 is a local maximum and +1 is a local minimum. This function has no global maximum or minimum.
|x| | Global minimum at x = 0 that cannot be found by taking derivatives, because the derivative does not exist at x = 0.
cos(x) | Infinitely many global maxima at 0, ±2π, ±4π, ..., and infinitely many global minima at ±π, ±3π, ±5π, ....
2 cos(x) − x | Infinitely many local maxima and minima, but no global maximum or minimum.
cos(3πx)/x with 0.1 ≤ x ≤ 1.1 | Global maximum at x = 0.1 (a boundary), a global minimum near x = 0.3, a local maximum near x = 0.6, and a local minimum near x = 1.0. (See figure at top of page.)
x^3 + 3x^2 − 2x + 1 defined over the closed interval (segment) [−4, 2] | Local maximum at x = −1 − √15/3, local minimum at x = −1 + √15/3, global maximum at x = 2, and global minimum at x = −4.
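The last row of the table above can be checked directly by the interior-plus-boundary method: the global extrema of x^3 + 3x^2 − 2x + 1 on [−4, 2] come either from interior critical points or from the endpoints. A short plain-Python check, with the quadratic formula written out by hand:

    import math

    def f(x):
        return x**3 + 3 * x**2 - 2 * x + 1

    a, b = -4.0, 2.0

    # f'(x) = 3x^2 + 6x - 2 = 0, solved with the quadratic formula:
    disc = math.sqrt(6**2 - 4 * 3 * (-2))
    crit = [(-6 - disc) / 6, (-6 + disc) / 6]       # -1 - sqrt(15)/3 and -1 + sqrt(15)/3

    candidates = [x for x in crit if a < x < b] + [a, b]
    x_max = max(candidates, key=f)
    x_min = min(candidates, key=f)
    print(f"global maximum: f({x_max}) = {f(x_max)}")   # x = 2.0,  f = 17.0
    print(f"global minimum: f({x_min}) = {f(x_min)}")   # x = -4.0, f = -7.0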

For a practical example, [6] assume a situation where someone has a given length of fencing, say P feet, and is trying to maximize the square footage of a rectangular enclosure, where x is the length, y is the width, and A = xy is the area. The perimeter constraint 2x + 2y = P gives y = P/2 − x, so the area can be written as a function of x alone:

A(x) = xy = x(P/2 − x).

The derivative with respect to x is:

dA/dx = P/2 − 2x.

Setting this equal to 0,

P/2 − 2x = 0,

reveals that x = P/4 is our only critical point. Now retrieve the endpoints by determining the interval to which x is restricted. Since the width y = P/2 − x must be positive, x < P/2, and since the length x must be positive, x > 0. Plugging the critical point x = P/4, as well as the endpoints x = 0 and x = P/2, into A(x) = x(P/2 − x) gives the values P^2/16, 0, and 0, respectively.

Therefore, the greatest area attainable with P feet of fencing is P^2/16 square feet, achieved by the square enclosure with side length P/4. [6]
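The same conclusion can be checked numerically for any particular amount of fencing. The sketch below uses P = 100 feet purely as a sample value (no specific amount is assumed by the argument above) and scans the admissible lengths on a grid:

    P = 100.0                                   # sample perimeter in feet (illustrative value only)

    def area(x):
        # x is the length; the width is forced to P/2 - x by the perimeter constraint
        return x * (P / 2 - x)

    n = 100_000
    xs = [(P / 2) * i / n for i in range(n + 1)]    # admissible lengths: 0 <= x <= P/2
    x_best = max(xs, key=area)

    print(f"best length x ≈ {x_best}")              # P/4 = 25.0
    print(f"largest area ≈ {area(x_best)}")         # P^2/16 = 625.0 square feet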

Functions of more than one variable

Peano surface, a counterexample to some criteria of local maxima of the 19th century

The global maximum of a paraboloid is the point at the top

Counterexample: The red dot shows a local minimum that is not a global minimum

For functions of more than one variable, similar conditions apply. For example, in the paraboloid figure above, the necessary conditions for a local maximum are similar to those of a function with only one variable. The first partial derivatives of z (the variable to be maximized) are zero at the maximum (the glowing dot on top in the figure). The second partial derivatives are negative. These are only necessary, not sufficient, conditions for a local maximum, because of the possibility of a saddle point. For use of these conditions to solve for a maximum, the function z must also be differentiable throughout. The second partial derivative test can help classify the point as a relative maximum or relative minimum. In contrast, there are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema. For example, if a bounded differentiable function f defined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use the intermediate value theorem and Rolle's theorem to prove this by contradiction). In two and more dimensions, this argument fails. This is illustrated by the function

f(x, y) = x^2 + y^2(1 − x)^3,    (x, y) ∈ ℝ^2,

whose only critical point is at (0,0), which is a local minimum with f(0,0) = 0. However, it cannot be a global one, because f(2,3) = −5.
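A quick numerical probe of this function confirms both claims: on a small patch around the origin it never drops below f(0, 0) = 0, yet it takes smaller values farther away.

    def f(x, y):
        # the two-variable counterexample discussed above
        return x**2 + y**2 * (1 - x)**3

    # Near the origin the function stays >= f(0, 0) = 0 ...
    near_origin = min(f(0.01 * i, 0.01 * j)
                      for i in range(-10, 11) for j in range(-10, 11))
    print(near_origin >= 0.0)     # True: (0, 0) is a local minimum on this patch

    # ... but it is not a global minimum:
    print(f(2, 3))                # -5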

Maxima or minima of a functional

If the domain of a function for which an extremum is to be found consists itself of functions (i.e. if an extremum is to be found of a functional), then the extremum is found using the calculus of variations.

In relation to sets

Maxima and minima can also be defined for sets. In general, if an ordered set S has a greatest element m, then m is a maximal element of the set, also denoted as max(S). Furthermore, if S is a subset of an ordered set T and m is the greatest element of S with respect to the order induced by T, then m is a least upper bound of S in T. Similar results hold for least element, minimal element and greatest lower bound. The maximum and minimum function for sets are used in databases, and can be computed rapidly, since the maximum (or minimum) of a set can be computed from the maxima of a partition; formally, they are self-decomposable aggregation functions.
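A minimal sketch of that decomposition property: the maximum of a set equals the maximum of the per-block maxima of any partition of it, which is what allows a database to aggregate over partitions or shards independently. The data and partition below are arbitrary.

    data = {3, 41, 7, 29, 18, 55, 2}

    # Any partition of the set into blocks:
    blocks = [{3, 41, 7}, {29, 18}, {55, 2}]

    partial_maxima = [max(block) for block in blocks]   # maximum of each block: [41, 29, 55]
    combined = max(partial_maxima)                      # maximum of the partial maxima

    print(combined == max(data))    # True: max is a self-decomposable aggregation function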

In the case of a general partial order, the least element (i.e., one that is smaller than all others) should not be confused with a minimal element (nothing is smaller). Likewise, a greatest element of a partially ordered set (poset) is an upper bound of the set which is contained within the set, whereas a maximal element m of a poset A is an element of A such that if m ≤ b (for any b in A), then m = b. Any least element or greatest element of a poset is unique, but a poset can have several minimal or maximal elements. If a poset has more than one maximal element, then these elements will not be mutually comparable.

In a totally ordered set, or chain, all elements are mutually comparable, so such a set can have at most one minimal element and at most one maximal element. Then, due to mutual comparability, the minimal element will also be the least element, and the maximal element will also be the greatest element. Thus in a totally ordered set, we can simply use the terms minimum and maximum.

If a chain is finite, then it will always have a maximum and a minimum. If a chain is infinite, then it need not have a maximum or a minimum. For example, the set of natural numbers has no maximum, though it has a minimum. If an infinite chain S is bounded, then the closure Cl(S) of the set occasionally has a minimum and a maximum, in which case they are called the greatest lower bound and the least upper bound of the set S, respectively.

Argument of the maximum

In mathematics, the arguments of the maxima (abbreviated arg max or argmax) are the points, or elements, of the domain of some function at which the function values are maximized. [8] In contrast to global maxima, which refer to the largest outputs of a function, arg max refers to the inputs, or arguments, at which the function outputs are as large as possible.

As an example, both the unnormalised and the normalised sinc function have an arg max of {0}, because both attain their global maximum value of 1 at x = 0.

The unnormalised sinc function (red) has an arg min of approximately {−4.49, 4.49}, because it has two global minimum values of approximately −0.217 at x = ±4.49. However, the normalised sinc function (blue) has an arg min of approximately {−1.43, 1.43}, because its global minima occur at x = ±1.43, even though the minimum value is the same.
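For sampled data, arg max and arg min are usually computed by index. The sketch below assumes NumPy is available (np.sinc is the normalised sinc, sin(πx)/(πx); the unnormalised version is obtained by rescaling the argument) and recovers the approximate values quoted above; the grid and its resolution are arbitrary choices.

    import numpy as np

    x = np.linspace(-8, 8, 200_001)                 # fine grid covering the relevant minima

    normalised = np.sinc(x)                         # sin(pi*x)/(pi*x)
    unnormalised = np.sinc(x / np.pi)               # sin(x)/x, since sinc(x/pi) = sin(x)/x

    print(round(abs(float(x[np.argmax(normalised)])), 6))       # 0.0: arg max is {0} for both
    print(round(abs(float(x[np.argmin(unnormalised)])), 2))     # 4.49
    print(round(abs(float(x[np.argmin(normalised)])), 2))       # 1.43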


Notes

  1. Plural: maxima and minima (or maximums and minimums).
  2. Plural: extrema.


References

  1. Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8.
  2. Larson, Ron; Edwards, Bruce H. (2009). Calculus (9th ed.). Brooks/Cole. ISBN 978-0-547-16702-2.
  3. Thomas, George B.; Weir, Maurice D.; Hass, Joel (2010). Thomas' Calculus: Early Transcendentals (12th ed.). Addison-Wesley. ISBN 978-0-321-58876-0.
  4. Weisstein, Eric W. "Minimum". mathworld.wolfram.com. Retrieved 2020-08-30.
  5. Weisstein, Eric W. "Maximum". mathworld.wolfram.com. Retrieved 2020-08-30.
  6. Garrett, Paul. "Minimization and maximization refresher".
  7. "The Unnormalized Sinc Function" (archived 2017-02-15 at the Wayback Machine). University of Sydney.
  8. For clarity, we refer to the input (x) as points and the output (y) as values; compare critical point and critical value.