This glossary of calculus is a list of definitions about calculus, its sub-disciplines, and related fields.
A function f defined on some set X with real or complex values is called bounded if the set of its values is bounded, that is, if there exists a real number M such that $|f(x)| \le M$ for all x in X. A function that is not bounded is said to be unbounded.
Sometimes, if f(x) ≤ A for all x in X, then the function is said to be bounded above by A. On the other hand, if f(x) ≥ B for all x in X, then the function is said to be bounded below by B.
The chain rule is a formula for computing the derivative of the composition of two differentiable functions f and g in terms of their derivatives: $(f \circ g)' = (f' \circ g) \cdot g'$. This may equivalently be expressed in terms of the variable. Let F = f∘g, or equivalently, F(x) = f(g(x)) for all x. Then one can also write $F'(x) = f'(g(x))\,g'(x)$.
The chain rule may be written in Leibniz's notation in the following way. If a variable z depends on the variable y, which itself depends on the variable x, so that y and z are dependent variables, then z, via the intermediate variable y, depends on x as well. The chain rule then states $\frac{dz}{dx} = \frac{dz}{dy} \cdot \frac{dy}{dx}$.
The two versions of the chain rule are related; if $z = f(y)$ and $y = g(x)$, then $\frac{dz}{dx} = \frac{dz}{dy} \cdot \frac{dy}{dx} = f'(y)\,g'(x) = f'(g(x))\,g'(x)$.
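For example, to differentiate $h(x) = \sin(x^{2})$, take $f(u) = \sin u$ and $g(x) = x^{2}$; the chain rule gives $h'(x) = f'(g(x))\,g'(x) = \cos(x^{2}) \cdot 2x$.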
A series is convergent if the sequence of its partial sums tends to a limit; that means that the partial sums become closer and closer to a given number as the number of their terms increases. More precisely, a series $\sum_{n=1}^{\infty} a_n$ with partial sums $S_k = a_1 + a_2 + \cdots + a_k$ converges if there exists a number $\ell$ such that for any arbitrarily small positive number $\varepsilon$, there is a (sufficiently large) integer $N$ such that for all $k \ge N$, $|S_k - \ell| \le \varepsilon$.
If the series is convergent, the number $\ell$ (necessarily unique) is called the sum of the series.
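For example, the geometric series $\sum_{n=1}^{\infty} 2^{-n}$ has partial sums $S_k = 1 - 2^{-k}$, which tend to 1, so the series converges and its sum is 1.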
Any series that is not convergent is said to be divergent.
The differential of a function y = f(x) is defined by $dy = f'(x)\,dx$, where $f'(x)$ is the derivative of f with respect to x, and dx is an additional real variable (so that dy is a function of x and dx). The notation is such that the equation $dy = \frac{dy}{dx}\,dx$ holds, where the derivative is represented in the Leibniz notation dy/dx, and this is consistent with regarding the derivative as the quotient of the differentials. One also writes $df(x) = f'(x)\,dx$.
Dirichlet's test states that if $\{a_n\}$ is a sequence of real numbers and $\{b_n\}$ a sequence of complex numbers satisfying $a_{n+1} \le a_n$, $\lim_{n \to \infty} a_n = 0$, and $\left|\sum_{n=1}^{N} b_n\right| \le M$ for every positive integer N, where M is some constant, then the series $\sum_{n=1}^{\infty} a_n b_n$ converges.
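For instance, with $a_n = 1/n$ and $b_n = (-1)^{n}$, the partial sums of the $b_n$ are bounded by 1, so the test shows that the alternating series $\sum_{n=1}^{\infty} (-1)^{n}/n$ converges.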
Then, the point $x_0$ is an essential discontinuity.
In this case, one of the one-sided limits $L^{-}$ and $L^{+}$ doesn't exist and the other is infinite – thus satisfying twice the conditions of essential discontinuity. So x0 is an essential discontinuity, infinite discontinuity, or discontinuity of the second kind. (This is distinct from the term essential singularity, which is often used when studying functions of complex variables.)
An exponential function is a function of the form $f(x) = b^{x}$, where b is a positive real number, and in which the argument x occurs as an exponent. For real numbers c and d, a function of the form $f(x) = a b^{cx+d}$ is also an exponential function, as it can be rewritten as $a b^{cx+d} = \left(a b^{d}\right) \left(b^{c}\right)^{x}$.
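For example, $2 \cdot 3^{2x+1}$ can be rewritten as $(2 \cdot 3)\left(3^{2}\right)^{x} = 6 \cdot 9^{x}$.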
A related theorem is the boundedness theorem, which states that a continuous function f on the closed interval [a, b] is bounded on that interval. That is, there exist real numbers m and M such that $m \le f(x) \le M$ for all $x$ in $[a, b]$.
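For example, $f(x) = x^{2}$ is continuous on $[-1, 2]$ and is bounded there with $m = 0$ and $M = 4$; by the extreme value theorem these bounds are actually attained, at $x = 0$ and $x = 2$.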
Faà di Bruno's formula gives the higher derivatives of a composite function: $\frac{d^{n}}{dx^{n}} f(g(x)) = \sum \frac{n!}{m_1!\, m_2!\, \cdots\, m_n!} \; f^{(m_1 + \cdots + m_n)}(g(x)) \; \prod_{j=1}^{n} \left(\frac{g^{(j)}(x)}{j!}\right)^{m_j},$ where the sum is over all n-tuples of nonnegative integers (m1, …, mn) satisfying the constraint $1 \cdot m_1 + 2 \cdot m_2 + 3 \cdot m_3 + \cdots + n \cdot m_n = n.$
Sometimes, to give it a memorable pattern, it is written in a way in which the coefficients that have a combinatorial interpretation are less explicit: $\frac{d^{n}}{dx^{n}} f(g(x)) = \sum \frac{n!}{m_1!\, 1!^{m_1}\, m_2!\, 2!^{m_2}\, \cdots\, m_n!\, n!^{m_n}} \; f^{(m_1 + \cdots + m_n)}(g(x)) \; \prod_{j=1}^{n} \left(g^{(j)}(x)\right)^{m_j}.$
Combining the terms with the same value of m1 + m2 + ... + mn = k and noticing that m_j has to be zero for j > n − k + 1 leads to a somewhat simpler formula expressed in terms of Bell polynomials Bn,k(x1, ..., xn−k+1): $\frac{d^{n}}{dx^{n}} f(g(x)) = \sum_{k=1}^{n} f^{(k)}(g(x)) \; B_{n,k}\!\left(g'(x), g''(x), \dots, g^{(n-k+1)}(x)\right).$
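For example, for n = 2 the only solutions of $1 \cdot m_1 + 2 \cdot m_2 = 2$ are $(m_1, m_2) = (2, 0)$ and $(0, 1)$, and the formula reduces to $\frac{d^{2}}{dx^{2}} f(g(x)) = f''(g(x))\,g'(x)^{2} + f'(g(x))\,g''(x)$.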
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number or complex number powers of the differentiation operator D, $D f(x) = \frac{d}{dx} f(x)$, and of the integration operator J, $J f(x) = \int_{0}^{x} f(s)\,ds$, and developing a calculus for such operators generalizing the classical one.
In this context, the term powers refers to iterative application of a linear operator to a function, in some analogy to function composition acting on a variable, i.e. $f^{\circ 2}(x) = (f \circ f)(x) = f(f(x))$.
The general Leibniz rule generalizes the product rule: if f and g are n-times differentiable functions, then the product fg is also n-times differentiable and its nth derivative is given by $(fg)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} f^{(n-k)} g^{(k)},$ where $\binom{n}{k}$ is the binomial coefficient and $f^{(j)}$ denotes the jth derivative of f (in particular, $f^{(0)} = f$).
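For instance, for n = 2 the rule gives $(fg)'' = f'' g + 2 f' g' + f g''$.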
The general rule can be proved by using the product rule and mathematical induction.
A harmonic progression is a sequence formed by taking the reciprocals of an arithmetic progression; equivalently, it is a sequence of the form $\frac{1}{a},\ \frac{1}{a+d},\ \frac{1}{a+2d},\ \ldots,\ \frac{1}{a+kd},\ \ldots,$ where −a/d is not a natural number and k is a natural number.
Equivalently, a sequence is a harmonic progression when each term is the harmonic mean of the neighboring terms.
It is not possible for a harmonic progression (other than the trivial case where a = 1 and k = 0) to sum to an integer. The reason is that, necessarily, at least one denominator of the progression will be divisible by a prime number that does not divide any other denominator. [53]
A first-order ordinary differential equation is homogeneous if it can be written in the form $f(x,y)\,dy = g(x,y)\,dx,$ where f and g are homogeneous functions of the same degree of x and y. In this case, the change of variable y = ux leads to an equation of the form $\frac{dx}{x} = h(u)\,du,$
which is easy to solve by integration of the two members.
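For example, $x\,dy = (x + y)\,dx$ is homogeneous (both sides involve homogeneous functions of degree 1); setting y = ux reduces it to $du = \frac{dx}{x}$, so $u = \ln|x| + C$ and hence $y = x\left(\ln|x| + C\right)$.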
Otherwise, a differential equation is homogeneous if it is a homogeneous function of the unknown function and its derivatives. In the case of linear differential equations, this means that there are no constant terms. The solutions of any linear ordinary differential equation of any order may be deduced by integration from the solution of the homogeneous equation obtained by removing the constant term.
The instantaneous velocity of an object is the derivative of its position x with respect to time t, $v = \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t},$ or $v = \frac{dx}{dt}.$
From this derivative equation, in the one-dimensional case it can be seen that the area under a velocity vs. time graph (v vs. t) is the displacement, x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t), the displacement sometimes being denoted s.
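For example, if $v(t) = 3t$ (in metres per second), the displacement over the first two seconds is $\int_{0}^{2} 3t\,dt = 6$ metres.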
or more compactly:
Mathematician Brook Taylor discovered integration by parts, first publishing the idea in 1715. [62] [63] More general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals. The discrete analogue for sequences is called summation by parts.
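For example, taking u = x and $dv = e^{x}\,dx$ gives $\int x e^{x}\,dx = x e^{x} - \int e^{x}\,dx = (x - 1) e^{x} + C$.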
Then, the point x0 = 1 is a jump discontinuity.
In this case, a single limit does not exist because the one-sided limits L− and L+ exist and are finite, but are not equal: since L− ≠ L+, the limit L does not exist. Then, x0 is called a jump discontinuity, step discontinuity, or discontinuity of the first kind. For this type of discontinuity, the function f may have any value at x0.
In mathematics, a quadratic function is a polynomial function with one or more variables in which the highest-degree term is of the second degree. For example, a quadratic function in three variables x, y, and z contains exclusively terms x², y², z², xy, xz, yz, x, y, z, and a constant: $f(x, y, z) = a x^{2} + b y^{2} + c z^{2} + d x y + e x z + f y z + g x + h y + i z + j,$ with at least one of the coefficients a, b, c, d, e, or f of the second-degree terms being non-zero.
A univariate (single-variable) quadratic function has the form [78] $f(x) = a x^{2} + b x + c, \qquad a \neq 0,$ in the single variable x. The graph of a univariate quadratic function is a parabola whose axis of symmetry is parallel to the y-axis.
If the quadratic function is set equal to zero, then the result is a quadratic equation. The solutions to the univariate equation are called the roots of the univariate function.
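The roots are given by the quadratic formula $x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}$; for example, $x^{2} - 5x + 6 = 0$ has roots x = 2 and x = 3.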
The bivariate case in terms of variables x and y has the form $f(x, y) = a x^{2} + b y^{2} + c x y + d x + e y + f,$ with at least one of a, b, c not equal to zero, and an equation setting this function equal to zero gives rise to a conic section (a circle or other ellipse, a parabola, or a hyperbola).
In general there can be an arbitrarily large number of variables, in which case the resulting surface is called a quadric, but the highest-degree term must be of degree 2, such as x², xy, yz, etc.
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a continuous function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F′ = f. The process of solving for antiderivatives is called antidifferentiation, and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as F and G.
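For example, $F(x) = \frac{x^{3}}{3}$ is an antiderivative of $f(x) = x^{2}$, and so is $\frac{x^{3}}{3} + 1$, since the derivative of a constant is zero.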
In mathematics, the derivative is a fundamental tool that quantifies the sensitivity of change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation.
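For a function f of a single variable, the derivative at a point a is the limit of difference quotients, $f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}$, when this limit exists; for example, for $f(x) = x^{2}$ this gives $f'(a) = 2a$.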
In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration was initially used to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity. Usage of integration expanded to a wide variety of scientific fields thereafter.
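For example, the area under the parabola $y = x^{2}$ between x = 0 and x = 1 is the definite integral $\int_{0}^{1} x^{2}\,dx = \tfrac{1}{3}$.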
In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve.
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation; it is indeed derived using the product rule.
In calculus, the constant of integration, often denoted by $C$ (or $c$), is a constant term added to an antiderivative of a function $f(x)$ to indicate that the indefinite integral of $f(x)$, on a connected domain, is only defined up to an additive constant. This constant expresses an ambiguity inherent in the construction of antiderivatives.
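For example, $\int 2x\,dx = x^{2} + C$, since $x^{2} + C$ has derivative $2x$ for every value of the constant C.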
In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral. The term numerical quadrature is more or less a synonym for "numerical integration", especially as applied to one-dimensional integrals. Some authors refer to numerical integration over more than one dimension as cubature; others take "quadrature" to include higher-dimensional integration.
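A simple quadrature rule of this kind is the trapezoidal rule, $\int_{a}^{b} f(x)\,dx \approx \frac{b-a}{2}\bigl(f(a) + f(b)\bigr)$; applied on a single interval to $f(x) = x^{2}$ on [0, 1] it gives 0.5, compared with the exact value $\tfrac{1}{3}$, and the error shrinks as the interval is subdivided.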
In calculus, integration by substitution, also known as u-substitution, reverse chain rule or change of variables, is a method for evaluating integrals and antiderivatives. It is the counterpart to the chain rule for differentiation, and can loosely be thought of as using the chain rule "backwards."
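For example, to evaluate $\int 2x \cos(x^{2})\,dx$, substitute $u = x^{2}$, so that $du = 2x\,dx$; the integral becomes $\int \cos u\,du = \sin u + C = \sin(x^{2}) + C$.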
In mathematics, an implicit equation is a relation of the form $R(x_1, \dots, x_n) = 0$, where R is a function of several variables. For example, the implicit equation of the unit circle is $x^{2} + y^{2} - 1 = 0$.
In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form $a_{0}(x)\,y + a_{1}(x)\,y' + a_{2}(x)\,y'' + \cdots + a_{n}(x)\,y^{(n)} = b(x),$ where a0(x), ..., an(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y′, ..., y(n) are the successive derivatives of an unknown function y of the variable x.
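For example, $y'' + x\,y' + y = \cos x$ is linear, whereas $y' = y^{2}$ is not, because the unknown function appears squared.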
In multivariate calculus, a differential or differential form is said to be exact or perfect, as contrasted with an inexact differential, if it is equal to the general differential $dQ$ for some differentiable function $Q$ in an orthogonal coordinate system.
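For example, in two variables the form $y\,dx + x\,dy$ is exact, since it equals $d(xy)$, whereas $y\,dx - x\,dy$ is not exact: no function Q satisfies both $\partial Q/\partial x = y$ and $\partial Q/\partial y = -x$, because the mixed second partial derivatives would have to be equal, yet 1 ≠ −1.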
In calculus, Leibniz's notation, named in honor of the 17th-century German philosopher and mathematician Gottfried Wilhelm Leibniz, uses the symbols dx and dy to represent infinitely small increments of x and y, respectively, just as Δx and Δy represent finite increments of x and y, respectively.
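In this notation, the derivative of y with respect to x is written $\frac{dy}{dx} = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}$, suggestively as a ratio of the two infinitesimal increments.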
In mathematics, the Riemann–Liouville integral associates with a real function another function Iαf of the same kind for each value of the parameter α > 0. The integral is a manner of generalization of the repeated antiderivative of f in the sense that for positive integer values of α, Iαf is an iterated antiderivative of f of order α. The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville, the latter of whom was the first to consider the possibility of fractional calculus in 1832. The operator agrees with the Euler transform, after Leonhard Euler, when applied to analytic functions. It was generalized to arbitrary dimensions by Marcel Riesz, who introduced the Riesz potential.
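For a suitable base point a, the operator may be written $I^{\alpha} f(x) = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} (x - t)^{\alpha - 1} f(t)\,dt$; for α = 1 this reduces to the ordinary antiderivative $\int_{a}^{x} f(t)\,dt$.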
Multivariable calculus is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one.
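For example, for $f(x, y) = x^{2} y$ the partial derivatives are $\frac{\partial f}{\partial x} = 2xy$ and $\frac{\partial f}{\partial y} = x^{2}$, obtained by differentiating with respect to one variable while holding the other fixed.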
In calculus, symbolic integration is the problem of finding a formula for the antiderivative, or indefinite integral, of a given function f(x), i.e. to find a formula for a differentiable function F(x) such that $F'(x) = f(x)$.
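For example, symbolic integration of $f(x) = \cos x$ produces the formula $F(x) = \sin x + C$, whereas numerical integration would only return the value of a particular definite integral.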
In mathematics, the derivative is a fundamental construction of differential calculus and admits many possible generalizations within the fields of mathematical analysis, combinatorics, algebra, geometry, etc.
In differential calculus, there is no single uniform notation for differentiation. Instead, various notations for the derivative of a function or variable have been proposed by various mathematicians. The usefulness of each notation varies with the context, and it is sometimes advantageous to use more than one notation in a given context. The most common notations for differentiation are listed below.
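The most common are Leibniz's notation $\frac{dy}{dx}$, Lagrange's notation $f'(x)$, Newton's dot notation $\dot{y}$ (used mainly for derivatives with respect to time), and Euler's operator notation $D_x f$.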
The fundamental theorem of calculus is a theorem that links the concept of differentiating a function with the concept of integrating a function. Roughly speaking, the two operations can be thought of as inverses of each other.
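Concretely, if f is continuous on [a, b], then the function $x \mapsto \int_{a}^{x} f(t)\,dt$ is an antiderivative of f, and for any antiderivative F of f one has $\int_{a}^{b} f(x)\,dx = F(b) - F(a)$.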
In calculus, the differential represents the principal part of the change in a function $y = f(x)$ with respect to changes in the independent variable. The differential dy is defined by $dy = f'(x)\,dx$, where $f'(x)$ is the derivative of f with respect to x, and dx is an additional real variable. The notation is such that the equation $dy = \frac{dy}{dx}\,dx$ holds, where the derivative is represented in the Leibniz notation dy/dx, consistent with regarding the derivative as the quotient of the differentials.
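For example, for $y = x^{2}$ the differential is $dy = 2x\,dx$; at x = 3 with dx = 0.1 this gives dy = 0.6, which approximates the actual change $(3.1)^{2} - 3^{2} = 0.61$.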
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of complex-valued functions may be easily reduced to the study of real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions need be considered.
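For example, $f(x, y) = x^{2} + y^{2}$ is a real-valued function of the two real variables x and y.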