In mathematics, a saddle point or minimax point [1] is a point on the surface of the graph of a function where the slopes (derivatives) in orthogonal directions are all zero (a critical point), but which is not a local extremum of the function. [2] An example of a saddle point is a critical point with a relative minimum along one axial direction (between peaks) and a relative maximum along the crossing axis. However, a saddle point need not be in this form. For example, the function z = x² + y³ has a critical point at (0, 0) that is a saddle point since it is neither a relative maximum nor a relative minimum, but it does not have a relative maximum or relative minimum in the y-direction.
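To see this concretely, here is a short worked check (added for illustration; it assumes the reconstructed example z = x² + y³):

    \frac{\partial z}{\partial x} = 2x, \qquad
    \frac{\partial z}{\partial y} = 3y^{2}, \qquad
    \text{so } \nabla z(0,0) = (0,0); \qquad
    z(x,0) = x^{2} \text{ (minimum at } x = 0\text{)}, \qquad
    z(0,y) = y^{3} \text{ (neither a maximum nor a minimum at } y = 0\text{)}.

Hence the origin is a critical point that is not a local extremum, i.e. a saddle point, even though the function has no relative maximum or minimum in the y-direction.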
The name derives from the fact that the prototypical example in two dimensions is a surface that curves up in one direction, and curves down in a different direction, resembling a riding saddle. In terms of contour lines, a saddle point in two dimensions gives rise to a contour map with a pair of lines intersecting at the point. Such intersections are rare in actual ordnance survey maps, as the height of the saddle point is unlikely to coincide with the integer multiples used in such maps. Instead, the saddle point appears as a blank space in the middle of four sets of contour lines that approach and veer away from it. For a basic saddle point, these sets occur in pairs, with an opposing high pair and an opposing low pair positioned in orthogonal directions. The critical contour lines generally do not have to intersect orthogonally.
A simple criterion for checking if a given stationary point of a real-valued function F(x,y) of two real variables is a saddle point is to compute the function's Hessian matrix at that point: if the Hessian is indefinite, then that point is a saddle point. For example, the Hessian matrix of the function z = x² − y² at the stationary point (0, 0) is the matrix

[ 2   0 ]
[ 0  −2 ]

which is indefinite. Therefore, this point is a saddle point. This criterion gives only a sufficient condition. For example, the point (0, 0) is a saddle point for the function z = x⁴ − y⁴, but the Hessian matrix of this function at the origin is the null matrix, which is not indefinite.
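A minimal sketch of this check in Python (using sympy; the code is illustrative and not part of the original article). It evaluates the Hessians of both example functions at the origin and tests for indefiniteness:

    import sympy as sp

    x, y = sp.symbols('x y')

    for f in (x**2 - y**2, x**4 - y**4):
        H = sp.hessian(f, (x, y)).subs({x: 0, y: 0})    # Hessian evaluated at the origin
        eigs = list(H.eigenvals())
        indefinite = any(e > 0 for e in eigs) and any(e < 0 for e in eigs)
        print(f, H.tolist(), 'indefinite' if indefinite else 'not indefinite (criterion inconclusive)')

The first function yields eigenvalues 2 and −2 (indefinite, so a saddle point); the second yields the null matrix, so the criterion says nothing even though the origin is in fact a saddle point.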
In the most general terms, a saddle point for a smooth function (whose graph is a curve, surface or hypersurface) is a stationary point such that the curve/surface/etc. in the neighborhood of that point is not entirely on any side of the tangent space at that point.
In a domain of one dimension, a saddle point is a point which is both a stationary point and a point of inflection. Since it is a point of inflection, it is not a local extremum.
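A standard one-variable example (added here for illustration) is f(x) = x³ at the origin:

    f(x) = x^{3}, \qquad f'(x) = 3x^{2}, \qquad f'(0) = 0, \qquad
    f''(x) = 6x \text{ changes sign at } x = 0,

so x = 0 is a stationary point and a point of inflection, hence not a local extremum.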
A saddle surface is a smooth surface containing one or more saddle points.
Classical examples of two-dimensional saddle surfaces in Euclidean space are the second-order surfaces: the hyperbolic paraboloid z = x² − y² (which is often referred to as "the saddle surface" or "the standard saddle surface") and the hyperboloid of one sheet. The Pringles potato chip or crisp is an everyday example of a hyperbolic paraboloid shape.
Saddle surfaces have negative Gaussian curvature, which distinguishes them from convex/elliptical surfaces, which have positive Gaussian curvature. A classical third-order saddle surface is the monkey saddle. [3]
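As a sketch of this sign claim (added here for illustration; it uses the standard Monge-patch curvature formula for a graph z = f(x, y), which is not stated in the article), the Gaussian curvature of the standard saddle surface at the origin comes out negative:

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**2 - y**2                                   # the standard saddle surface z = x^2 - y^2

    fx, fy = sp.diff(f, x), sp.diff(f, y)
    fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(sp.diff(f, x), y)
    K = (fxx*fyy - fxy**2) / (1 + fx**2 + fy**2)**2   # Gaussian curvature of the graph z = f(x, y)

    print(sp.simplify(K.subs({x: 0, y: 0})))          # -4, i.e. negative at the saddle point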
In a two-player zero-sum game defined on a continuous space, the equilibrium point is a saddle point.
For a second-order linear autonomous system, a critical point is a saddle point if the characteristic equation has one positive and one negative real eigenvalue. [4]
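A small illustration in Python (the matrix below is chosen for the example and is not from the article): the origin of x′ = Ax is classified as a saddle when the eigenvalues are real with opposite signs.

    import numpy as np

    A = np.array([[1.0, 3.0],
                  [2.0, 0.0]])                 # example coefficient matrix; eigenvalues are 3 and -2

    eigs = np.linalg.eigvals(A)
    is_saddle = np.isreal(eigs).all() and eigs.real.min() < 0 < eigs.real.max()
    print(eigs, 'saddle' if is_saddle else 'not a saddle')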
In optimization subject to equality constraints, the first-order conditions describe a saddle point of the Lagrangian.
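A minimal sketch of this fact on a toy problem (the problem and code are illustrative, not from the article): minimizing x² + y² subject to x + y = 1, the first-order conditions of the Lagrangian pick out a point at which the Lagrangian itself has an indefinite Hessian, i.e. a saddle point.

    import sympy as sp

    x, y, lam = sp.symbols('x y lambda')
    L = x**2 + y**2 - lam*(x + y - 1)                 # Lagrangian of: minimize x^2 + y^2 s.t. x + y = 1

    foc = [sp.diff(L, v) for v in (x, y, lam)]        # first-order conditions
    sol = sp.solve(foc, (x, y, lam), dict=True)[0]    # x = y = 1/2, lambda = 1
    H = sp.hessian(L, (x, y, lam)).subs(sol)

    print(sol)
    print(list(H.eigenvals()))                        # eigenvalues of mixed sign: a saddle point of L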
In dynamical systems, if the dynamics are given by a differentiable map f, then a point is hyperbolic if and only if the differential of fⁿ (where n is the period of the point) has no eigenvalue on the (complex) unit circle when computed at the point. Then a saddle point is a hyperbolic periodic point whose stable and unstable manifolds both have nonzero dimension.
A saddle point of a matrix is an element which is both the largest element in its column and the smallest element in its row.
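A short sketch of this definition in Python (added for illustration; the example matrix is not from the article):

    def matrix_saddle_points(m):
        """Return (row, column, value) triples that are smallest in their row and largest in their column."""
        points = []
        for i, row in enumerate(m):
            for j, v in enumerate(row):
                column = [m[k][j] for k in range(len(m))]
                if v == min(row) and v == max(column):
                    points.append((i, j, v))
        return points

    print(matrix_saddle_points([[1, 2],
                                [3, 4]]))             # [(1, 0, 3)]: 3 is smallest in its row, largest in its column

This is the same notion used for the saddle point of a payoff matrix in a finite zero-sum game.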
In geometry, a paraboloid is a quadric surface that has exactly one axis of symmetry and no center of symmetry. The term "paraboloid" is derived from parabola, which refers to a conic section that has a similar property of symmetry.
In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equation constraints. It is named after the mathematician Joseph-Louis Lagrange.
In geometry, a normal is an object that is perpendicular to a given object. For example, the normal line to a plane curve at a given point is the line perpendicular to the tangent line to the curve at the point.
In vector calculus, the Jacobian matrix of a vector-valued function of several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and the determinant are often referred to simply as the Jacobian in literature. They are named after Carl Gustav Jacob Jacobi.
In differential geometry, the Gaussian curvature or Gauss curvature Κ of a smooth surface in three-dimensional space at a point is the product of the principal curvatures, κ₁ and κ₂, at the given point: Κ = κ₁κ₂. For example, a sphere of radius r has Gaussian curvature 1/r² everywhere, and a flat plane and a cylinder have Gaussian curvature zero everywhere. The Gaussian curvature can also be negative, as in the case of a hyperboloid or the inside of a torus.
In mathematical analysis, the maximum and minimum of a function are, respectively, the greatest and least value taken by the function. Known generically as extremum, they may be defined either within a given range or on the entire domain of a function. Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions.
In mathematics, specifically in differential topology, Morse theory enables one to analyze the topology of a manifold by studying differentiable functions on that manifold. According to the basic insights of Marston Morse, a typical differentiable function on a manifold will reflect the topology quite directly. Morse theory allows one to find CW structures and handle decompositions on manifolds and to obtain substantial information about their homology.
In differential calculus and differential geometry, an inflection point, point of inflection, flex, or inflection is a point on a smooth plane curve at which the curvature changes sign. In particular, in the case of the graph of a function, it is a point where the function changes from being concave to convex, or vice versa.
In mathematics, the Hessian matrix, Hessian or Hesse matrix is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants". The Hessian is sometimes denoted by H or, ambiguously, by ∇².
In geometry, a surface S in 3-dimensional Euclidean space is ruled if through every point of S, there is a straight line that lies on S. Examples include the plane, the lateral surface of a cylinder or cone, a conical surface with elliptical directrix, the right conoid, the helicoid, and the tangent developable of a smooth curve in space.
In calculus, a derivative test uses the derivatives of a function to locate its critical points and determine whether each point is a local maximum, a local minimum, or a saddle point. Derivative tests can also give information about the concavity of a function.
In mathematics, the monkey saddle is the surface defined by the equation z = x³ − 3xy².
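A quick check (added here, using sympy) that the origin of the monkey saddle is a degenerate critical point, so the Hessian indefiniteness test described above is inconclusive there:

    import sympy as sp

    x, y = sp.symbols('x y')
    z = x**3 - 3*x*y**2                               # the monkey saddle

    print(sp.hessian(z, (x, y)).subs({x: 0, y: 0}))   # Matrix([[0, 0], [0, 0]]): the test is inconclusive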
In mathematics, particularly in calculus, a stationary point of a differentiable function of one variable is a point on the graph of the function where the function's derivative is zero. Informally, it is a point where the function "stops" increasing or decreasing.
In calculus, Newton's method is an iterative method for finding the roots of a differentiable function f, which are solutions to the equation f(x) = 0. However, to optimize a twice-differentiable function f, our goal is to find the roots of f′. We can therefore use Newton's method on the derivative f′ to find solutions to f′(x) = 0, also known as the critical points of f. These solutions may be minima, maxima, or saddle points; see the section "Several variables" in Critical point (mathematics) and also the section "Geometric interpretation" in this article. This is relevant in optimization, which aims to find (global) minima of the function f.
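A minimal sketch of this idea in Python (the objective f(x) = x⁴ − 3x² and the helper name are chosen here for illustration and are not from the article):

    def newton_on_derivative(df, d2f, x0, steps=20):
        """Newton iteration on f': solve f'(x) = 0 to locate a critical point of f."""
        x = x0
        for _ in range(steps):
            x -= df(x) / d2f(x)
        return x

    df  = lambda x: 4*x**3 - 6*x                      # f'(x)  for f(x) = x**4 - 3*x**2
    d2f = lambda x: 12*x**2 - 6                       # f''(x)

    print(newton_on_derivative(df, d2f, x0=1.0))      # ~1.2247 (= sqrt(3/2)), a critical point of f

Whether the critical point found this way is a minimum, maximum, or saddle point still has to be checked separately, for example with a second-derivative test.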
In mathematics, a critical point is the argument of a function at which the function's derivative is zero. The value of the function at a critical point is a critical value.
In mathematics, the second partial derivative test is a method in multivariable calculus used to determine if a critical point of a function is a local minimum, maximum or saddle point.
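For reference (a standard statement of the test, added here), for a twice-differentiable f(x, y) with f_x = f_y = 0 at the point in question:

    D = f_{xx}\,f_{yy} - f_{xy}^{2} \quad\text{at the critical point:}\qquad
    \begin{cases}
    D > 0,\ f_{xx} > 0 & \text{local minimum}\\
    D > 0,\ f_{xx} < 0 & \text{local maximum}\\
    D < 0              & \text{saddle point}\\
    D = 0              & \text{test is inconclusive}
    \end{cases}

Note that D is exactly the determinant of the Hessian matrix, so D < 0 corresponds to the indefinite-Hessian criterion given earlier.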
In mathematics, Riemann–Hilbert problems, named after Bernhard Riemann and David Hilbert, are a class of problems that arise in the study of differential equations in the complex plane. Several existence theorems for Riemann–Hilbert problems have been produced by Mark Krein, Israel Gohberg and others.
In mathematics, the differential geometry of surfaces deals with the differential geometry of smooth surfaces with various additional structures, most often, a Riemannian metric.
In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point, in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.