In mathematics, the Weierstrass function, named after its discoverer, Karl Weierstrass, is an example of a real-valued function that is continuous everywhere but differentiable nowhere. It is also an example of a fractal curve.
The Weierstrass function has historically served the role of a pathological function, being the first published example (1872) specifically concocted to challenge the notion that every continuous function is differentiable except on a set of isolated points. [1] Weierstrass's demonstration that continuity does not imply almost-everywhere differentiability upended mathematics, overturning several proofs that had relied on geometric intuition and vague definitions of smoothness. Functions of this type were denounced by contemporaries: Henri Poincaré famously described them as "monsters" and called Weierstrass's work "an outrage against common sense", while Charles Hermite wrote that they were a "lamentable scourge". The functions were difficult to visualize until the arrival of computers in the next century, and the results did not gain wide acceptance until practical applications such as models of Brownian motion necessitated infinitely jagged functions (nowadays known as fractal curves). [2]
In Weierstrass's original paper, the function was defined as a Fourier series:

$f(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x),$

where $0 < a < 1$, $b$ is a positive odd integer, and

$ab > 1 + \frac{3}{2}\pi.$

The minimum value of $b$ for which there exists $0 < a < 1$ such that these constraints are satisfied is $b = 7$. This construction, along with the proof that the function is not differentiable over any interval, was first delivered by Weierstrass in a paper presented to the Königliche Akademie der Wissenschaften on 18 July 1872. [3] [4] [5]
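As an illustrative numerical sketch (not part of Weierstrass's paper), the partial sums of this series are easy to evaluate directly. The Python sketch below uses the illustrative parameter choices a = 0.82 and b = 7, which satisfy the constraints above since ab = 5.74 > 1 + 3π/2 ≈ 5.71; the names A, B, N_TERMS and weierstrass are ad hoc.

    import math

    # Partial sums of f(x) = sum_{n>=0} a^n * cos(b^n * pi * x), with
    # illustrative parameters satisfying 0 < a < 1, b a positive odd integer,
    # and a*b > 1 + 3*pi/2.  The truncation error is at most a^N / (1 - a).
    A = 0.82
    B = 7
    N_TERMS = 60

    def weierstrass(x, a=A, b=B, n_terms=N_TERMS):
        """Truncated Weierstrass series at x."""
        return sum(a**n * math.cos(b**n * math.pi * x) for n in range(n_terms))

    for x in (0.0, 0.25, 1/3, 0.5, 1.0):
        print(f"f({x:.4f}) is approximately {weierstrass(x):+.6f}")
    print("truncation error bound:", A**N_TERMS / (1 - A))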
Despite being differentiable nowhere, the function is continuous: since the terms of the infinite series which defines it are bounded by $\pm a^n$ and this has finite sum for $0 < a < 1$, convergence of the sum of the terms is uniform by the Weierstrass M-test with $M_n = a^n$. Since each partial sum is continuous, by the uniform limit theorem, it follows that $f$ is continuous. Additionally, since each partial sum is uniformly continuous, it follows that $f$ is also uniformly continuous.
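A hedged numerical sketch of the M-test bound used here, with the same illustrative parameters a = 0.82, b = 7 as above and an ad hoc grid standing in for the supremum over all x:

    import math

    a, b = 0.82, 7

    def partial_sum(x, n_terms):
        # partial sum of sum_n a^n * cos(b^n * pi * x)
        return sum(a**n * math.cos(b**n * math.pi * x) for n in range(n_terms))

    xs = [i / 1000 for i in range(1001)]        # grid on [0, 1]
    for N in (5, 10, 20):
        # gap between the N-term partial sum and a much longer one, compared
        # with the uniform tail bound sum_{n>=N} a^n = a^N / (1 - a)
        gap = max(abs(partial_sum(x, 40) - partial_sum(x, N)) for x in xs)
        print(f"N={N:2d}  observed gap {gap:.3e}  <=  M-test bound {a**N / (1 - a):.3e}")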
It might be expected that a continuous function must have a derivative, or that the set of points where it is not differentiable should be countably infinite or finite. According to Weierstrass in his paper, earlier mathematicians including Gauss had often assumed that this was true. This might be because it is difficult to draw or visualize a continuous function whose set of nondifferentiable points is something other than a countable set of points. Analogous results for better-behaved classes of continuous functions do exist, for example the Lipschitz functions, whose set of non-differentiability points must be a Lebesgue null set (Rademacher's theorem). When we try to draw a general continuous function, we usually draw the graph of a function which is Lipschitz or otherwise well-behaved. Moreover, the fact that the set of non-differentiability points of a monotone function has measure zero implies that the rapid oscillations of Weierstrass's function are necessary to ensure that it is nowhere differentiable.
The Weierstrass function was one of the first fractals studied, although this term was not used until much later. The function has detail at every level, so zooming in on a piece of the curve does not show it getting progressively closer to a straight line. Rather, between any two points, no matter how close, the function is not monotone.
The computation of the Hausdorff dimension $D$ of the graph of the classical Weierstrass function was an open problem until 2018, while it was generally believed that $D = 2 + \log_b a$. [6] [7] That $D$ is strictly less than 2 follows from the conditions on $a$ and $b$ from above. Only after more than 30 years was this proved rigorously. [8]
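The conjectured value can be compared with a crude box-counting experiment. The sketch below estimates the box-counting dimension of the graph (a related but distinct notion from Hausdorff dimension), using the equivalent parametrization a = b^(-1/2), b = 3 for a cleaner numerical picture; these parameters satisfy Hardy's nowhere-differentiability condition ab ≥ 1 discussed below rather than Weierstrass's original constraint, and the sample counts and box sizes are ad hoc, so the output is only indicative.

    import math

    # illustrative parameters: a = b**(-0.5), so the believed dimension is 1.5
    a, b, n_terms = 3 ** -0.5, 3, 25

    def W(x):
        return sum(a**n * math.cos(b**n * math.pi * x) for n in range(n_terms))

    def graph_boxes(eps, samples_per_column=400):
        # for each eps-wide column over [0, 1], estimate the vertical extent of
        # the graph and count how many eps-sized boxes that extent spans
        total = 0
        for c in range(round(1 / eps)):
            ys = [W((c + j / samples_per_column) * eps) for j in range(samples_per_column)]
            total += int((max(ys) - min(ys)) / eps) + 1
        return total

    print("believed dimension D = 2 + log_b(a) =", round(2 + math.log(a, b), 3))
    counts = {eps: graph_boxes(eps) for eps in (0.02, 0.01, 0.005)}
    print("slope 0.02 -> 0.01 :", round(math.log(counts[0.01] / counts[0.02], 2), 2))
    print("slope 0.01 -> 0.005:", round(math.log(counts[0.005] / counts[0.01], 2), 2))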
The term Weierstrass function is often used in real analysis to refer to any function with similar properties and construction to Weierstrass's original example. For example, the cosine function can be replaced in the infinite series by a piecewise linear "zigzag" function. G. H. Hardy showed that the function of the above construction is nowhere differentiable with the assumptions $0 < a < 1$ and $ab \geq 1$. [9]
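For instance, a minimal sketch of such a "zigzag" variant, with illustrative parameters a = 0.7 and b = 9 (so that ab > 1) and hypothetical helper names zigzag and zigzag_series:

    # zigzag(x): distance from x to the nearest integer, a 1-periodic triangle wave
    def zigzag(x):
        return abs(x - round(x))

    def zigzag_series(x, a=0.7, b=9, n_terms=40):
        # partial sum of sum_n a^n * zigzag(b^n * x), a piecewise linear
        # analogue of the cosine series above
        return sum(a**n * zigzag(b**n * x) for n in range(n_terms))

    for x in (0.1, 0.2, 0.3):
        print(x, round(zigzag_series(x), 6))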
The Weierstrass function is based on the earlier Riemann function, $R(x) = \sum_{n=1}^{\infty} \frac{\sin(n^2 x)}{n^2}$, claimed to be differentiable nowhere. Occasionally, this function has also been called the Weierstrass function. [10]
Although Bernhard Riemann is reported to have claimed that the function is differentiable nowhere, Riemann never published a proof, and Weierstrass noted that he could find no evidence of one surviving either in Riemann's papers or passed on orally by Riemann's students.
In 1916, G. H. Hardy confirmed that the function does not have a finite derivative at any value $\pi\xi$ where $\xi$ is irrational, nor at certain classes of rational $\xi$. [9] In 1969, Joseph Gerver found that the Riemann function does have a derivative at every value of $x$ of the form $\pi\frac{2A+1}{2B+1}$ with integer $A$ and $B$, that is, at the rational multiples of $\pi$ with an odd numerator and denominator. At these points, the function has a derivative of $-\tfrac{1}{2}$. [11] In 1971, Gerver showed that the function has no finite derivative at the remaining rational multiples of $\pi$, completing the problem of the differentiability of the Riemann function. [12]
As the Riemann function is differentiable only on a null set of points, it is differentiable almost nowhere.
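A hedged numerical illustration of these statements: the sketch below truncates R(x) = Σ sin(n²x)/n² and compares symmetric difference quotients at x = π/3 (odd numerator and denominator, where the derivative is reported to equal −1/2) and at an irrational point. The truncation level and step sizes are ad hoc, so this only suggests the behaviour rather than demonstrating it.

    import math

    N_TERMS = 200_000   # truncation of R(x) = sum_{n>=1} sin(n^2 x) / n^2

    def R(x):
        return sum(math.sin(n * n * x) / (n * n) for n in range(1, N_TERMS + 1))

    def diff_quotient(x, h):
        return (R(x + h) - R(x - h)) / (2 * h)

    x_odd = math.pi / 3       # pi * (1/3): odd numerator and odd denominator
    x_irr = math.sqrt(2)      # an irrational multiple of pi
    for h in (1e-2, 1e-3, 1e-4):
        print(f"h={h:.0e}  at pi/3: {diff_quotient(x_odd, h):+.4f}"
              f"  at sqrt(2): {diff_quotient(x_irr, h):+.4f}")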
It is convenient to write the Weierstrass function equivalently as

$W_\alpha(x) = \sum_{n=0}^{\infty} b^{-n\alpha} \cos(b^n \pi x)$

for $0 < \alpha < 1$. Then $W_\alpha(x)$ is Hölder continuous of exponent $\alpha$, which is to say that there is a constant $C$ such that

$|W_\alpha(x) - W_\alpha(y)| \le C\,|x - y|^\alpha$

for all $x$ and $y$. [13] Moreover, $W_1$ is Hölder continuous of all orders $\alpha < 1$ but not Lipschitz continuous.
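As a rough empirical check of this estimate (not a proof), one can sample the ratio |W_α(x) − W_α(y)| / |x − y|^α over random pairs; the parameters α = 0.5 and b = 3, the truncation level and the sample count below are illustrative choices.

    import math
    import random

    alpha, b, n_terms = 0.5, 3, 40

    def W(x):
        # truncated W_alpha(x) = sum_n b^(-n*alpha) * cos(b^n * pi * x)
        return sum(b ** (-n * alpha) * math.cos(b**n * math.pi * x)
                   for n in range(n_terms))

    random.seed(0)
    worst = 0.0
    for _ in range(20_000):
        x, y = random.random(), random.random()
        if x != y:
            worst = max(worst, abs(W(x) - W(y)) / abs(x - y) ** alpha)
    print("largest sampled ratio |W(x)-W(y)| / |x-y|^alpha:", round(worst, 3))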
It turns out that the Weierstrass function is far from being an isolated example: although it is "pathological", it is also "typical" of continuous functions. In the sense of Baire category, the set of nowhere-differentiable real-valued functions on [0, 1] is comeagre in the space of all continuous real-valued functions on [0, 1] with the topology of uniform convergence; in a measure-theoretic sense as well, "almost every" continuous function (in the sense of prevalence) is nowhere differentiable.
In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century.
In mathematical analysis, the Dirac delta function, also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions.
In the mathematical field of analysis, uniform convergence is a mode of convergence of functions stronger than pointwise convergence. A sequence of functions $(f_n)$ converges uniformly to a limiting function $f$ on a set $E$ (the function domain) if, given any arbitrarily small positive number $\varepsilon$, a number $N$ can be found such that each of the functions $f_N, f_{N+1}, f_{N+2}, \ldots$ differs from $f$ by no more than $\varepsilon$ at every point $x$ in $E$. Described in an informal way, if $f_n$ converges to $f$ uniformly, then how quickly the functions $f_n$ approach $f$ is "uniform" throughout $E$ in the following sense: in order to guarantee that $f_n(x)$ differs from $f(x)$ by less than a chosen distance $\varepsilon$, we only need to make sure that $n$ is larger than or equal to a certain $N$, which we can find without knowing the value of $x \in E$ in advance. In other words, there exists a number $N = N(\varepsilon)$ that could depend on $\varepsilon$ but is independent of $x$, such that choosing $n \geq N$ will ensure that $|f_n(x) - f(x)| < \varepsilon$ for all $x \in E$. In contrast, pointwise convergence of $f_n$ to $f$ merely guarantees that for any $x \in E$ given in advance, we can find $N = N(\varepsilon, x)$ such that, for that particular $x$, $f_n(x)$ falls within $\varepsilon$ of $f(x)$ whenever $n \geq N$.
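A standard worked example (a sketch, with f_n(x) = x^n and the tolerance EPS below as ad hoc choices): on [0, 0.9] one N works for every x at once, while on [0, 1) the N needed at a point x grows without bound as x approaches 1, so the convergence there is only pointwise.

    import math

    # On [0, 0.9]:  |x**n - 0| <= 0.9**n < eps as soon as n >= log(eps)/log(0.9),
    # independently of x, so the convergence is uniform there.
    # On [0, 1): the N needed at a point x grows without bound as x -> 1.
    EPS = 0.01
    print("uniform N on [0, 0.9]:", math.ceil(math.log(EPS) / math.log(0.9)))
    for x in (0.9, 0.99, 0.999):
        print(f"pointwise N needed at x = {x}:", math.ceil(math.log(EPS) / math.log(x)))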
In calculus, Taylor's theorem gives an approximation of a $k$-times differentiable function around a given point by a polynomial of degree $k$, called the $k$-th-order Taylor polynomial. For a smooth function, the Taylor polynomial is the truncation at order $k$ of the Taylor series of the function. The first-order Taylor polynomial is the linear approximation of the function, and the second-order Taylor polynomial is often referred to as the quadratic approximation. There are several versions of Taylor's theorem, some giving explicit estimates of the approximation error of the function by its Taylor polynomial.
In mathematics, a power series is an infinite series of the form $\sum_{n=0}^{\infty} a_n (x - c)^n$, where $a_n$ represents the coefficient of the $n$th term and $c$ is a constant. Power series are useful in mathematical analysis, where they arise as Taylor series of infinitely differentiable functions. In fact, Borel's theorem implies that every power series is the Taylor series of some smooth function.
In mathematics, the n-th harmonic number is the sum of the reciprocals of the first n natural numbers:

$H_n = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} = \sum_{k=1}^{n} \frac{1}{k}.$
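A small sketch computing H_n directly and comparing it with the classical approximation ln(n) + γ; the hard-coded value of the Euler-Mascheroni constant below is an illustrative approximation.

    import math

    EULER_GAMMA = 0.5772156649015329   # Euler-Mascheroni constant (approximate)

    def harmonic(n):
        # H_n = 1 + 1/2 + ... + 1/n
        return sum(1 / k for k in range(1, n + 1))

    for n in (10, 100, 1000):
        print(f"H_{n} = {harmonic(n):.6f}   ln(n) + gamma = {math.log(n) + EULER_GAMMA:.6f}")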
In mathematical analysis, an improper integral is an extension of the notion of a definite integral to cases that violate the usual assumptions for that kind of integral. In the context of Riemann integrals, this typically involves unboundedness, either of the set over which the integral is taken or of the integrand, or both. It may also involve bounded but not closed sets or bounded but not continuous functions. While an improper integral is typically written symbolically just like a standard definite integral, it actually represents a limit of a definite integral or a sum of such limits; thus improper integrals are said to converge or diverge. If a regular definite integral is worked out as if it is improper, the same answer will result.
In mathematics, the Gibbs phenomenon is the oscillatory behavior of the Fourier series of a piecewise continuously differentiable periodic function around a jump discontinuity. The n-th partial Fourier series of the function produces large peaks around the jump which overshoot and undershoot the function values. As more sinusoids are used, this approximation error approaches a limit of about 9% of the jump, though the infinite Fourier series sum does eventually converge almost everywhere except points of discontinuity.
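As an illustrative numerical sketch (with ad hoc grid and term counts), the overshoot can be observed for a square wave of jump size 2, whose Fourier series is (4/π) Σ over odd k of sin(kx)/k:

    import math

    def square_wave_partial(x, n_max):
        # partial Fourier sum of the square wave (values -1 and +1, jump at x = 0)
        return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, n_max + 1, 2))

    xs = [i * math.pi / 20000 for i in range(1, 4001)]   # just to the right of the jump
    for n_max in (9, 99, 999):
        peak = max(square_wave_partial(x, n_max) for x in xs)
        print(f"up to k={n_max:3d}: peak ~ {peak:.4f}, "
              f"overshoot ~ {100 * (peak - 1) / 2:.2f}% of the jump")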
The Basel problem is a problem in mathematical analysis with relevance to number theory, concerning an infinite sum of inverse squares. It was first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734, and read on 5 December 1735 in The Saint Petersburg Academy of Sciences. Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up more than a century later by Bernhard Riemann in his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude", in which he defined his zeta function and proved its basic properties. The problem is named after Basel, hometown of Euler as well as of the Bernoulli family who unsuccessfully attacked the problem.
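A quick sketch comparing partial sums of the series of inverse squares with Euler's value π²/6:

    import math

    target = math.pi ** 2 / 6
    for N in (10, 1_000, 100_000):
        partial = sum(1 / n ** 2 for n in range(1, N + 1))
        print(f"N={N:6d}  partial sum = {partial:.8f}   pi^2/6 = {target:.8f}")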
In mathematics, the Riemann–Liouville integral associates with a real function $f$ another function $I^\alpha f$ of the same kind for each value of the parameter $\alpha > 0$. The integral is a manner of generalization of the repeated antiderivative of $f$ in the sense that for positive integer values of $\alpha$, $I^\alpha f$ is an iterated antiderivative of $f$ of order $\alpha$. The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville, the latter of whom was the first to consider the possibility of fractional calculus in 1832. The operator agrees with the Euler transform, after Leonhard Euler, when applied to analytic functions. It was generalized to arbitrary dimensions by Marcel Riesz, who introduced the Riesz potential.
In mathematics, a Dirichlet problem asks for a function which solves a specified partial differential equation (PDE) in the interior of a given region and takes prescribed values on the boundary of the region.
In mathematics, a sequence (s1, s2, s3, ...) of real numbers is said to be equidistributed, or uniformly distributed, if the proportion of terms falling in a subinterval is proportional to the length of that subinterval. Such sequences are studied in Diophantine approximation theory and have applications to Monte Carlo integration.
In calculus, the Leibniz integral rule for differentiation under the integral sign, named after Gottfried Wilhelm Leibniz, states that for an integral of the form $\int_{a(x)}^{b(x)} f(x, t)\, dt$, where $-\infty < a(x), b(x) < \infty$ and the integrand $f(x, t)$ depends on both $x$ and $t$, the derivative of this integral is expressible as

$\frac{d}{dx}\left( \int_{a(x)}^{b(x)} f(x, t)\, dt \right) = f\big(x, b(x)\big)\, \frac{d}{dx} b(x) - f\big(x, a(x)\big)\, \frac{d}{dx} a(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x, t)\, dt,$

where the partial derivative $\frac{\partial}{\partial x}$ indicates that inside the integral, only the variation of $f(x, t)$ with $x$ is considered in taking the derivative.
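As a sanity check of the formula on a concrete example, F(x) = ∫ from 0 to x² of sin(xt) dt; the simple trapezoid integrator and the step sizes below are ad hoc stand-ins for proper quadrature.

    import math

    def trapezoid(g, lo, hi, n=10_000):
        # simple composite trapezoid rule (an ad hoc stand-in for real quadrature)
        h = (hi - lo) / n
        return h * (0.5 * g(lo) + sum(g(lo + i * h) for i in range(1, n)) + 0.5 * g(hi))

    def F(x):
        # F(x) = integral from 0 to x**2 of sin(x * t) dt
        return trapezoid(lambda t: math.sin(x * t), 0.0, x * x)

    x, h = 1.3, 1e-5
    difference_quotient = (F(x + h) - F(x - h)) / (2 * h)
    # Leibniz rule: f(x, b(x)) * b'(x) - f(x, a(x)) * a'(x) + integral of df/dx
    leibniz_rhs = (math.sin(x * x * x) * 2 * x
                   + trapezoid(lambda t: t * math.cos(x * t), 0.0, x * x))
    print("central difference quotient of F:", round(difference_quotient, 5))
    print("right-hand side of Leibniz rule: ", round(leibniz_rhs, 5))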
Thomae's function is a real-valued function of a real variable that can be defined as:

$f(x) = \begin{cases} \dfrac{1}{q} & \text{if } x = \dfrac{p}{q} \text{ is rational, with } p \in \mathbb{Z} \text{ and } q \in \mathbb{N} \text{ coprime}, \\ 0 & \text{if } x \text{ is irrational.} \end{cases}$
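A minimal sketch of this definition using exact fractions; since floats are finite binary fractions, an "irrational" argument can only be approximated, and the heuristic cutoff below (limit_denominator) is an ad hoc choice.

    from fractions import Fraction

    def thomae(x):
        # Thomae's function: 1/q at a rational p/q in lowest terms, 0 at irrationals.
        if isinstance(x, (Fraction, int)):
            return 1.0 / Fraction(x).denominator
        approx = Fraction(x).limit_denominator(10 ** 9)
        if approx == Fraction(x):      # the float is a "simple" rational value
            return 1.0 / approx.denominator
        return 0.0                     # treated as irrational (approximation)

    print(thomae(Fraction(3, 4)))    # 1/4
    print(thomae(Fraction(2, 6)))    # reduces to 1/3
    print(thomae(5))                 # integers give 1
    print(thomae(2 ** 0.5))          # float approximation of sqrt(2) -> 0.0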
Different definitions have been given for the dimension of a complex network or graph. For example, metric dimension is defined in terms of the resolving set for a graph. Dimension has also been defined based on the box covering method applied to graphs. Here we describe the definition based on the complex network zeta function. This generalises the definition based on the scaling property of the volume with distance. The best definition depends on the application.
In mathematics, the ATS theorem is the theorem on the approximation of a trigonometric sum by a shorter one. The application of the ATS theorem in certain problems of mathematical and theoretical physics can be very helpful.
In mathematical analysis, the Dirichlet kernel, named after the German mathematician Peter Gustav Lejeune Dirichlet, is the collection of periodic functions defined as

$D_n(x) = \sum_{k=-n}^{n} e^{ikx} = 1 + 2\sum_{k=1}^{n} \cos(kx) = \frac{\sin\left(\left(n + \tfrac{1}{2}\right)x\right)}{\sin(x/2)},$

where $n$ is any nonnegative integer.
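A small sketch checking numerically that the sum form and the closed form above agree away from multiples of 2π; the sample points are arbitrary.

    import math

    def dirichlet_sum(n, x):
        # D_n(x) = 1 + 2 * sum_{k=1}^{n} cos(kx)
        return 1 + 2 * sum(math.cos(k * x) for k in range(1, n + 1))

    def dirichlet_closed(n, x):
        # closed form sin((n + 1/2) x) / sin(x / 2), valid away from x = 0 mod 2*pi
        return math.sin((n + 0.5) * x) / math.sin(x / 2)

    for x in (0.3, 1.0, 2.5):
        print(x, round(dirichlet_sum(5, x), 8), round(dirichlet_closed(5, x), 8))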
In applied mathematics and mathematical analysis, the fractal derivative or Hausdorff derivative is a non-Newtonian generalization of the derivative dealing with the measurement of fractals, defined in fractal geometry. Fractal derivatives were created for the study of anomalous diffusion, by which traditional approaches fail to factor in the fractal nature of the media. A fractal measure $t$ is scaled according to $t^\alpha$. Such a derivative is local, in contrast to the similarly applied fractional derivative. Fractal calculus is formulated as a generalization of standard calculus.
In the branch of mathematics known as integration theory, the McShane integral, created by Edward J. McShane, is a modification of the Henstock-Kurzweil integral. The McShane integral is equivalent to the Lebesgue integral.