Pointwise

In mathematics, the qualifier pointwise is used to indicate that a certain property is defined by considering each value f(x) of some function f. An important class of pointwise concepts are the pointwise operations, that is, operations defined on functions by applying the operations to function values separately for each point in the domain of definition. Important relations can also be defined pointwise.

Pointwise operations

[Figure] Pointwise sum (upper plot, violet) and product (green) of the functions sin (lower plot, blue) and ln (red). The highlighted vertical slice shows the computation at the point x = 2π.

Formal definition

A binary operation o : Y × Y → Y on a set Y can be lifted pointwise to an operation O : (X → Y) × (X → Y) → (X → Y) on the set X → Y of all functions from X to Y as follows: Given two functions f1 : X → Y and f2 : X → Y, define the function O(f1, f2) : X → Y by

(O(f1, f2))(x) = o(f1(x), f2(x)) for all x ∈ X.

Commonly, o and O are denoted by the same symbol. A similar definition is used for unary operations o, and for operations of other arity.
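As an illustration (a minimal sketch, not from the source; the names lift and O are ad hoc), the lifting can be written in Python as:

```python
from typing import Callable, TypeVar

X = TypeVar("X")
Y = TypeVar("Y")

def lift(o: Callable[[Y, Y], Y]):
    """Lift a binary operation o : Y × Y → Y to the pointwise operation
    O on functions X → Y."""
    def O(f1: Callable[[X], Y], f2: Callable[[X], Y]) -> Callable[[X], Y]:
        # (O(f1, f2))(x) = o(f1(x), f2(x)) for all x in X
        return lambda x: o(f1(x), f2(x))
    return O
```

For example, lifting ordinary addition recovers the pointwise sum shown in the figure above:

```python
import math

add = lift(lambda a, b: a + b)   # pointwise +
h = add(math.sin, math.log)      # h(x) = sin(x) + ln(x)
print(h(2 * math.pi))            # ≈ 1.8379: the figure's slice at x = 2π
```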

Examples

The pointwise addition of two functions f and g with the same domain and codomain is defined by:

(f + g)(x) = f(x) + g(x).

The pointwise product or pointwise multiplication is:

(f · g)(x) = f(x) · g(x).

The pointwise product with a scalar is usually written with the scalar term first. Thus, when λ is a scalar:

(λ · f)(x) = λ · f(x).

An example of an operation on functions which is not pointwise is convolution.
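To see concretely why convolution is not pointwise, here is a small illustrative sketch (not from the source; conv and the sample functions are ad hoc) using a discrete convolution truncated to a finite support:

```python
def conv(f, g, support):
    """Discrete convolution (f * g)(n) = sum over k of f(k) * g(n - k),
    restricted to a finite support to keep the demo finite."""
    return lambda n: sum(f(k) * g(n - k) for k in support)

support = range(-2, 3)
g  = lambda k: 1        # constant kernel
f1 = lambda k: k        # f1(0) == 0
f2 = lambda k: k * k    # f2(0) == 0, so f1 and f2 agree at the point 0

print(conv(f1, g, support)(0))   # 0   (= -2 - 1 + 0 + 1 + 2)
print(conv(f2, g, support)(0))   # 10  (=  4 + 1 + 0 + 1 + 4)
```

Although f1 and f2 agree at 0, their convolutions with g differ there: the value of a convolution at a point depends on values of its arguments elsewhere, so it cannot be computed point by point.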

Properties

Pointwise operations inherit such properties as associativity, commutativity and distributivity from corresponding operations on the codomain. If A is some algebraic structure, the set of all functions from X to the carrier set of A can be turned into an algebraic structure of the same type in an analogous way.
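For instance, real-valued functions on any set inherit the abelian group structure of the reals under addition. A minimal sketch (illustrative only; all names are ad hoc):

```python
zero = lambda x: 0.0                         # pointwise identity: the zero function
neg  = lambda f: (lambda x: -f(x))           # pointwise inverse
add  = lambda f, g: (lambda x: f(x) + g(x))  # pointwise sum

f = lambda x: x * x
# Spot-check the group axioms at a sample point; they hold at every x
# precisely because they hold in the codomain.
x = 3.0
assert add(f, zero)(x) == f(x)       # identity
assert add(f, neg(f))(x) == zero(x)  # inverses
```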

Componentwise operations

Componentwise operations are usually defined on vectors, where vectors are elements of the set K^n for some natural number n and some field K. If we denote the i-th component of any vector v as v_i, then componentwise addition is (u + v)_i = u_i + v_i.

Componentwise operations can also be defined on matrices. Matrix addition, where (A + B)_ij = A_ij + B_ij, is a componentwise operation, while matrix multiplication is not.

A tuple can be regarded as a function, and a vector is a tuple. Therefore, any vector v corresponds to the function f : {1, …, n} → K such that f(i) = v_i, and any componentwise operation on vectors is the pointwise operation on the functions corresponding to those vectors.
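A quick illustrative check (not from the source) that componentwise vector addition agrees with pointwise addition of the corresponding index functions:

```python
v = (3.0, 1.0, 4.0)
w = (1.0, 5.0, 9.0)

componentwise = tuple(vi + wi for vi, wi in zip(v, w))

# The same vectors viewed as functions {0, 1, 2} → R.
fv = lambda i: v[i]
fw = lambda i: w[i]
pointwise = tuple(fv(i) + fw(i) for i in range(len(v)))

assert componentwise == pointwise == (4.0, 6.0, 13.0)
```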

Pointwise relations

In order theory it is common to define a pointwise partial order on functions. With A, B posets, the set of functions A → B can be ordered by defining f ≤ g if (∀x ∈ A) f(x) ≤ g(x). Pointwise orders also inherit some properties of the underlying posets. For instance, if A and B are continuous lattices, then so is the set of functions A → B with pointwise order. [1] Using the pointwise order on functions one can concisely define other important notions, for instance: [2]

- A closure operator c on a poset P is a monotone and idempotent self-map on P (i.e. a projection operator) with the additional property that id_P ≤ c, where id_P is the identity function on P.
- Similarly, a projection operator k is called a kernel operator if and only if k ≤ id_P.
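A small sketch of the pointwise order just defined, over a finite domain (illustrative only; pointwise_le is an ad hoc name). It also exhibits a pair of incomparable functions, which is why the order is in general only partial:

```python
def pointwise_le(f, g, domain):
    """f ≤ g in the pointwise order iff f(x) <= g(x) for every x in domain."""
    return all(f(x) <= g(x) for x in domain)

domain = [-1.0, -0.5, 0.0, 0.5, 1.0]
f = lambda x: x * x
g = lambda x: abs(x)
print(pointwise_le(f, g, domain))   # True: x**2 <= |x| whenever |x| <= 1

h = lambda x: x
k = lambda x: -x
print(pointwise_le(h, k, domain), pointwise_le(k, h, domain))  # False False: incomparable
```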

An example of an infinitary pointwise relation is pointwise convergence of functions: a sequence of functions (f_n), with each f_n : X → Y, converges pointwise to a function f if for each x in X

lim_(n→∞) f_n(x) = f(x).
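A classic illustration (a sketch, not from the source): the functions f_n(x) = x^n on [0, 1] converge pointwise to the function that is 0 on [0, 1) and 1 at x = 1, showing that a pointwise limit of continuous functions need not be continuous:

```python
def f(n, x):
    return x ** n

for x in (0.0, 0.5, 0.9, 1.0):
    print(x, [f"{f(n, x):.3g}" for n in (1, 10, 100, 1000)])
# 0.5 and 0.9 tend to 0 as n grows, while x = 1.0 stays at 1,
# so the pointwise limit is discontinuous at x = 1.
```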

Notes

  1. Gierz et al., p. xxxiii.
  2. Gierz et al., p. 26.

References

For order theory examples:

Gierz, G.; Hofmann, K. H.; Keimel, K.; Lawson, J. D.; Mislove, M.; Scott, D. S. (2003). Continuous Lattices and Domains. Encyclopedia of Mathematics and Its Applications, Vol. 93. Cambridge University Press.

This article incorporates material from Pointwise on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.