In mathematics, the Iverson bracket, named after Kenneth E. Iverson, is a notation that generalises the Kronecker delta, which is the Iverson bracket of the statement x = y. It maps any statement to a function of the free variables in that statement. This function takes the value 1 for the values of the variables for which the statement is true, and the value 0 otherwise. It is generally denoted by putting the statement inside square brackets:

$$[P] = \begin{cases} 1 & \text{if } P \text{ is true;} \\ 0 & \text{otherwise.} \end{cases}$$

In other words, the Iverson bracket of a statement is the indicator function of the set of values for which the statement is true.
The Iverson bracket allows using capital-sigma notation without restriction on the summation index. That is, for any property $P(k)$ of the integer $k$, one can rewrite the restricted sum $\sum_{P(k)} f(k)$ in the unrestricted form $\sum_k f(k)\,[P(k)]$. With this convention, $f(k)$ does not need to be defined for the values of k for which the Iverson bracket equals 0; that is, a summand $f(k)\,[\text{false}]$ must evaluate to 0 regardless of whether $f(k)$ is defined.
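As a minimal illustrative sketch in Python, the bracket can be modelled by converting a Boolean condition to 0 or 1, so that a restricted sum becomes an unrestricted sum with the condition moved into the summand:

    # Minimal sketch: the Iverson bracket as a 0/1 factor in a sum.
    def iverson(condition):
        """Return 1 if the condition holds, 0 otherwise."""
        return 1 if condition else 0

    # Restricted sum: sum of k**2 over 1 <= k <= 10.
    restricted = sum(k**2 for k in range(1, 11))

    # Unrestricted form: sum over a larger index range, with the condition
    # moved into the summand as an Iverson-bracket factor.
    unrestricted = sum(k**2 * iverson(1 <= k <= 10) for k in range(-50, 51))

    assert restricted == unrestricted  # both equal 385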
The notation was originally introduced by Kenneth E. Iverson in his programming language APL, [1] [2] though restricted to single relational operators enclosed in parentheses; the generalisation to arbitrary statements, the notational restriction to square brackets, and the application to summation were advocated by Donald Knuth to avoid ambiguity in parenthesized logical expressions. [3]
There is a direct correspondence between arithmetic on Iverson brackets, logic, and set operations. For instance, let A and B be sets and $P(k)$, $Q(k)$ any properties of integers; then we have, for example,

$$[P \land Q] = [P]\,[Q], \qquad [P \lor Q] = [P] + [Q] - [P]\,[Q], \qquad [\neg P] = 1 - [P],$$

$$\#A = \sum_k [k \in A], \qquad [k \in A \cap B] = [k \in A]\,[k \in B], \qquad [k \in A \cup B] = [k \in A] + [k \in B] - [k \in A]\,[k \in B].$$
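These correspondences can be checked exhaustively over truth values; a minimal sketch in Python, treating int(...) as the bracket:

    # Check [P and Q] == [P][Q], [P or Q] == [P] + [Q] - [P][Q], and
    # [not P] == 1 - [P] over all truth-value combinations.
    for p in (False, True):
        for q in (False, True):
            assert int(p and q) == int(p) * int(q)
            assert int(p or q) == int(p) + int(q) - int(p) * int(q)
            assert int(not p) == 1 - int(p)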
The notation allows moving boundary conditions of summations (or integrals) into the summand as a separate factor, freeing up space around the summation operator and, more importantly, allowing the conditions to be manipulated algebraically.
We can mechanically derive a well-known sum manipulation rule using Iverson brackets:

$$\sum_{k \in A} f(k) + \sum_{k \in B} f(k) = \sum_k f(k)\,[k \in A] + \sum_k f(k)\,[k \in B] = \sum_k f(k)\,\bigl([k \in A] + [k \in B]\bigr)$$
$$= \sum_k f(k)\,\bigl([k \in A \cup B] + [k \in A \cap B]\bigr) = \sum_{k \in A \cup B} f(k) + \sum_{k \in A \cap B} f(k).$$
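A small numerical sanity check of this rule, for an arbitrary choice of f and two small sets (illustration only):

    # Verify: sum over A plus sum over B equals sum over the union plus
    # sum over the intersection, for an arbitrary f and small sets.
    A = {1, 2, 3, 5}
    B = {3, 4, 5, 7}
    f = lambda k: k * k + 1

    lhs = sum(f(k) for k in A) + sum(f(k) for k in B)
    rhs = sum(f(k) for k in A | B) + sum(f(k) for k in A & B)
    assert lhs == rhs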
The well-known rule for interchanging the order of summation,

$$\sum_{j=1}^{n} \sum_{k=1}^{j} f(j,k) = \sum_{k=1}^{n} \sum_{j=k}^{n} f(j,k),$$

is likewise easily derived:

$$\sum_{j=1}^{n} \sum_{k=1}^{j} f(j,k) = \sum_{j,k} f(j,k)\,[1 \le j \le n]\,[1 \le k \le j] = \sum_{j,k} f(j,k)\,[1 \le k \le j \le n]$$
$$= \sum_{j,k} f(j,k)\,[1 \le k \le n]\,[k \le j \le n] = \sum_{k=1}^{n} \sum_{j=k}^{n} f(j,k).$$
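The interchange can also be checked numerically on a small example (an illustrative sketch):

    # Check the interchange of summation order over a triangular index set.
    n = 6
    f = lambda j, k: 10 * j + k
    lhs = sum(f(j, k) for j in range(1, n + 1) for k in range(1, j + 1))
    rhs = sum(f(j, k) for k in range(1, n + 1) for j in range(k, n + 1))
    assert lhs == rhs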
For instance, Euler's totient function, which counts the number of positive integers up to n that are coprime to n, can be expressed by

$$\varphi(n) = \sum_{i=1}^{n} [\gcd(i, n) = 1], \qquad n \ge 1.$$
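This formula transcribes directly into Python; the following is a minimal sketch (not an efficient totient implementation):

    from math import gcd

    def totient(n):
        """Euler's totient via the Iverson-bracket sum: phi(n) = sum of [gcd(i, n) == 1]."""
        return sum(1 if gcd(i, n) == 1 else 0 for i in range(1, n + 1))

    assert [totient(n) for n in range(1, 11)] == [1, 1, 2, 2, 4, 2, 6, 4, 6, 4]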
Another use of the Iverson bracket is to simplify equations with special cases. For example, the formula

$$\sum_{\substack{1 \le k \le n \\ \gcd(k, n) = 1}} k = \frac{1}{2} n \varphi(n)$$

is valid for n > 1 but is off by 1/2 for n = 1. To get an identity valid for all positive integers n (i.e., all values for which $\varphi(n)$ is defined), a correction term involving the Iverson bracket may be added:

$$\sum_{\substack{1 \le k \le n \\ \gcd(k, n) = 1}} k = \frac{1}{2} n \bigl(\varphi(n) + [n = 1]\bigr).$$
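A brief numerical check of the corrected identity, including the special case n = 1 (doubling both sides to stay in integers; illustration only):

    from math import gcd

    def totative_sum(n):
        """Sum of the integers 1 <= k <= n that are coprime to n."""
        return sum(k for k in range(1, n + 1) if gcd(k, n) == 1)

    def totient(n):
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    for n in range(1, 50):
        correction = 1 if n == 1 else 0          # the Iverson bracket [n = 1]
        assert 2 * totative_sum(n) == n * (totient(n) + correction)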
Many common functions, especially those with a natural piecewise definition, may be expressed in terms of the Iverson bracket. The Kronecker delta notation is a specific case of Iverson notation when the condition is equality. That is,

$$\delta_{ij} = [i = j].$$
The indicator function of a set A, often denoted $\mathbf{1}_A(x)$, $I_A(x)$ or $\chi_A(x)$, is an Iverson bracket with set membership as its condition:

$$\mathbf{1}_A(x) = [x \in A].$$
The Heaviside step function, sign function, [1] and absolute value function are also easily expressed in this notation:

$$H(x) = [x > 0], \qquad \operatorname{sgn}(x) = [x > 0] - [x < 0],$$

and

$$|x| = x\,\bigl([x > 0] - [x < 0]\bigr) = x \cdot \operatorname{sgn}(x).$$
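These expressions are easy to verify numerically; a minimal sketch:

    # Check sgn(x) = [x > 0] - [x < 0] and |x| = x * sgn(x) on sample values.
    for x in (-3.5, -1, 0, 0.25, 7):
        sgn = (1 if x > 0 else 0) - (1 if x < 0 else 0)
        assert abs(x) == x * sgn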
The comparison functions max and min (returning the larger or smaller of two arguments) may be written as

$$\max(x, y) = x\,[x > y] + y\,[x \le y]$$

and

$$\min(x, y) = x\,[x \le y] + y\,[x > y].$$
The floor and ceiling functions can be expressed as

$$\lfloor x \rfloor = \sum_{n} n\,[n \le x < n + 1]$$

and

$$\lceil x \rceil = \sum_{n} n\,[n - 1 < x \le n],$$

where the index n of summation is understood to range over all the integers.
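The bracket sums transcribe directly if the index is restricted to a finite range known to contain the answer; a minimal sketch (in practice one would simply use math.floor and math.ceil):

    import math

    def floor_via_brackets(x, bound=1000):
        """Floor of x as the sum of n * [n <= x < n + 1] over a finite range of n."""
        return sum(n * (1 if n <= x < n + 1 else 0) for n in range(-bound, bound + 1))

    def ceil_via_brackets(x, bound=1000):
        """Ceiling of x as the sum of n * [n - 1 < x <= n] over a finite range of n."""
        return sum(n * (1 if n - 1 < x <= n else 0) for n in range(-bound, bound + 1))

    for x in (-2.5, -1.0, 0.0, 0.3, 2.5, 7.0):
        assert floor_via_brackets(x) == math.floor(x)
        assert ceil_via_brackets(x) == math.ceil(x)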
The ramp function can be expressed as

$$R(x) = x\,[x \ge 0].$$
The trichotomy of the reals is equivalent to the following identity:

$$[x < y] + [x = y] + [x > y] = 1.$$
The Möbius function has the property (and can be defined by the recurrence [4])

$$\sum_{d \mid n} \mu(d) = [n = 1].$$
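Read as a recurrence, this identity determines μ(n) from its values at the proper divisors of n; a minimal (and deliberately naive) sketch:

    def mobius(n):
        """Moebius function via the recurrence sum over d | n of mu(d) = [n == 1]."""
        if n == 1:
            return 1
        # Solve for mu(n): mu(n) = [n == 1] - sum of mu(d) over proper divisors d of n,
        # and for n > 1 the bracket [n == 1] is 0.
        proper_divisors = [d for d in range(1, n) if n % d == 0]
        return 0 - sum(mobius(d) for d in proper_divisors)

    # First few values: mu(1..10) = 1, -1, -1, 0, -1, 1, -1, 0, 0, 1
    assert [mobius(n) for n in range(1, 11)] == [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]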
In the 1830s, Guglielmo dalla Sommaja used the expression $0^{0^x}$ to represent what would now be written $[x > 0]$; he also used similar variants for related conditions. [3] Following one common convention, those quantities are equal where defined: $0^{0^x}$ is 1 if x > 0, is 0 if x = 0, and is undefined otherwise.
In addition to the now-standard square brackets [ · ] , and the original parentheses ( · ) , blackboard bold brackets have also been used, e.g. ⟦ · ⟧ , as well as other unusual forms of bracketing marks available in the publisher's typeface, accompanied by a marginal note.