
In mathematics, **Poinsot's spirals** are two spirals represented by the polar equations

*r* = *a* csch(*n*θ)

*r* = *a* sech(*n*θ)

where csch is the hyperbolic cosecant and sech is the hyperbolic secant.^{[1]} They are named after the French mathematician Louis Poinsot.
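As a minimal numerical sketch (the helper names and the sample parameters *a* = 1, *n* = 1 are illustrative, not from the source), both curves can be evaluated pointwise from the hyperbolic functions:

```python
import math

def poinsot_csch(theta, a=1.0, n=1.0):
    """r = a * csch(n*theta); undefined at theta = 0, where sinh vanishes."""
    return a / math.sinh(n * theta)

def poinsot_sech(theta, a=1.0, n=1.0):
    """r = a * sech(n*theta); defined for all theta, with r = a at theta = 0."""
    return a / math.cosh(n * theta)

def to_cartesian(theta, r):
    """Convert a polar point (theta, r) to Cartesian (x, y)."""
    return (r * math.cos(theta), r * math.sin(theta))

# Sample a few points on the sech branch:
points = [to_cartesian(t, poinsot_sech(t)) for t in (0.0, 0.5, 1.0)]
```

Since sech(*n*θ) decays like 2*e*^{−*n*θ} for large θ, the sech branch winds inward toward the pole, while the csch branch grows without bound as θ → 0.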

- ↑ Lawrence, J. Dennis (1972). *A Catalog of Special Plane Curves*. New York: Dover. pp. 192–194. ISBN 0486602885.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
