Linearity

In mathematics, the term linear is used in two distinct senses for two different properties:

  1. linearity of a function (or mapping);
  2. linearity of a polynomial.

An example of a linear function is the function defined by f(x) = (ax, bx) that maps the real line to a line in the Euclidean plane R2 that passes through the origin. An example of a linear polynomial in the variables X, Y and Z is aX + bY + cZ + d.

Linearity of a mapping is closely related to proportionality. Examples in physics include the linear relationship of voltage and current in an electrical conductor (Ohm's law), and the relationship of mass and weight. By contrast, more complicated relationships, such as between velocity and kinetic energy, are nonlinear.
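The contrast can be sketched numerically. The following snippet (an illustration, not part of the article; the resistance and mass values are made up) shows that voltage scales proportionally with current under Ohm's law, while kinetic energy does not scale proportionally with velocity:

```python
# Illustrative sketch: Ohm's law V = I*R is linear in I, while
# kinetic energy E = (1/2)*m*v**2 is nonlinear in v.

def voltage(current, resistance=10.0):
    """Ohm's law: doubling the current doubles the voltage."""
    return current * resistance

def kinetic_energy(velocity, mass=2.0):
    """Doubling the velocity quadruples the energy: nonlinear."""
    return 0.5 * mass * velocity ** 2

assert voltage(2.0) == 2 * voltage(1.0)                # proportional
assert kinetic_energy(2.0) == 4 * kinetic_energy(1.0)  # not proportional
```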

Generalized for functions in more than one dimension, linearity means the property of a function of being compatible with addition and scaling, also known as the superposition principle.

Linearity of a polynomial means that its degree is less than two. The use of the term for polynomials stems from the fact that the graph of a polynomial in one variable is a straight line. In the term "linear equation", the word refers to the linearity of the polynomials involved.

Because a function such as f(x) = ax + b is defined by a linear polynomial in its argument, it is sometimes also referred to as being a "linear function", and the relationship between the argument and the function value may be referred to as a "linear relationship". This is potentially confusing, but usually the intended meaning will be clear from the context.

The word linear comes from Latin linearis, "pertaining to or resembling a line".

In mathematics

Linear maps

In mathematics, a linear map or linear function f(x) is a function that satisfies the two properties: [1]

  1. Additivity: f(x + y) = f(x) + f(y).
  2. Homogeneity of degree 1: f(αx) = αf(x) for all α.

These properties are known as the superposition principle. In this definition, x is not necessarily a real number, but can in general be an element of any vector space. A more special definition of linear function, not coinciding with the definition of linear map, is used in elementary mathematics (see below).

Additivity alone implies homogeneity for rational α, since f(x + x) = f(x) + f(x) implies f(nx) = nf(x) for any natural number n by mathematical induction, and then nf(x) = f(nx) = f(m(n/m)x) = mf((n/m)x) implies f((n/m)x) = (n/m)f(x). The density of the rational numbers in the reals implies that any additive continuous function is homogeneous for any real number α, and is therefore linear.
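The two superposition properties can be probed numerically at sample points. The sketch below (an illustration added here, not part of the article) checks a genuinely linear map and shows that adding a nonzero constant term breaks additivity:

```python
# Sketch: numerically probe additivity and homogeneity for maps R -> R
# at a handful of sample points. A passing check is evidence, not proof.

def is_additive(f, xs, ys, tol=1e-9):
    """f(x + y) == f(x) + f(y) at all sampled pairs."""
    return all(abs(f(x + y) - (f(x) + f(y))) < tol for x in xs for y in ys)

def is_homogeneous(f, xs, alphas, tol=1e-9):
    """f(a*x) == a*f(x) at all sampled points and scalars."""
    return all(abs(f(a * x) - a * f(x)) < tol for x in xs for a in alphas)

pts = [-2.0, -0.5, 0.0, 1.0, 3.0]
scalars = [-1.0, 0.5, 2.0]

linear = lambda x: 3.0 * x        # satisfies both properties
affine = lambda x: 3.0 * x + 1.0  # constant term: fails additivity

assert is_additive(linear, pts, pts) and is_homogeneous(linear, pts, scalars)
assert not is_additive(affine, pts, pts)
```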

The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and other operators constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it can generally be solved by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions.
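The linearity of the derivative can be seen concretely: d/dx (af + bg) = a f' + b g'. The sketch below (an illustration, not from the article) checks this with a central finite-difference approximation:

```python
# Sketch: the derivative is a linear operator, so differentiating a
# linear combination gives the same combination of the derivatives.

import math

def deriv(f, x, h=1e-6):
    """Central finite-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

a, b, x = 2.0, -3.0, 0.7
combo = lambda t: a * math.sin(t) + b * math.exp(t)

lhs = deriv(combo, x)
rhs = a * deriv(math.sin, x) + b * deriv(math.exp, x)
assert abs(lhs - rhs) < 1e-6
```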

Linear polynomials

In a usage different from the above definition, a polynomial of degree 1 is said to be linear, because the graph of a function of that form is a straight line. [2]

Over the reals, a simple example of a linear equation is given by:

y = mx + b

where m is often called the slope or gradient, and b the y-intercept, which gives the point of intersection between the graph of the function and the y-axis.

Note that this usage of the term linear is not the same as in the section above, because linear polynomials over the real numbers do not in general satisfy either additivity or homogeneity. In fact, they do so if and only if the constant term – b in the example – equals 0. If b ≠ 0, the function is called an affine function (see in greater generality affine transformation).

Linear algebra is the branch of mathematics concerned with systems of linear equations.
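As a small illustration of such a system (an example added here, with made-up coefficients), a 2×2 system of linear equations can be solved by Cramer's rule:

```python
# Sketch: solve the 2x2 linear system
#   2x + 1y = 5
#   1x + 3y = 10
# by Cramer's rule, a classic linear-algebra technique.

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ (x, y) = (e, f) via Cramer's rule."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular system: no unique solution")
    return ((e * d - b * f) / det, (a * f - e * c) / det)

x, y = solve_2x2(2, 1, 1, 3, 5, 10)
assert (x, y) == (1.0, 3.0)  # check: 2*1 + 3 = 5 and 1 + 3*3 = 10
```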

Boolean functions

Hasse diagram of a linear Boolean function

In Boolean algebra, a linear function is a function f for which there exist a0, a1, …, an ∈ {0, 1} such that

f(b1, …, bn) = a0 ⊕ (a1 ∧ b1) ⊕ … ⊕ (an ∧ bn), where b1, …, bn ∈ {0, 1}.

Note that if a0 = 1, the above function is considered affine in linear algebra (i.e. not linear).

A Boolean function is linear if one of the following holds for the function's truth table:

  1. In every row in which the truth value of the function is T, there are an odd number of Ts assigned to the arguments, and in every row in which the function is F there is an even number of Ts assigned to arguments. Specifically, f(F, F, ..., F) = F, and these functions correspond to linear maps over the Boolean vector space.
  2. In every row in which the value of the function is T, there is an even number of Ts assigned to the arguments of the function; and in every row in which the truth value of the function is F, there are an odd number of Ts assigned to arguments. In this case, f(F, F, ..., F) = T.

Another way to express this is that each variable either always makes a difference to the truth value of the operation or never does.

Negation, logical biconditional, exclusive or, tautology, and contradiction are linear functions.
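The coefficient form above gives a direct (if brute-force) linearity test: search for coefficients a0, a1, …, an that reproduce the truth table. The sketch below is an illustration added here, not part of the article:

```python
# Sketch: a Boolean function of n arguments is linear iff some choice
# of coefficients a0, a1, ..., an in {0, 1} satisfies
#   f(b1, ..., bn) = a0 XOR (a1 AND b1) XOR ... XOR (an AND bn)
# for every row of the truth table. Brute force over all coefficients.

from itertools import product

def is_linear(f, n):
    rows = list(product((0, 1), repeat=n))
    for a0, *a in product((0, 1), repeat=n + 1):
        def g(row):
            v = a0
            for ai, bi in zip(a, row):
                v ^= ai & bi
            return v
        if all(int(bool(f(*r))) == g(r) for r in rows):
            return True
    return False

assert is_linear(lambda p, q: p ^ q, 2)      # exclusive or
assert is_linear(lambda p, q: 1, 2)          # tautology
assert not is_linear(lambda p, q: p & q, 2)  # conjunction
```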

Physics

In physics, linearity is a property of the differential equations governing many systems; for instance, the Maxwell equations or the diffusion equation. [3]

Linearity of a homogeneous differential equation means that if two functions f and g are solutions of the equation, then any linear combination af + bg is, too.
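This can be checked numerically for a familiar case. The sketch below (an illustration added here; the coefficients are made up) verifies that an arbitrary combination of sin and cos still satisfies y'' + y = 0, using a finite-difference second derivative:

```python
# Sketch: sin and cos solve the homogeneous equation y'' + y = 0,
# so any combination a*sin + b*cos does too. Verified approximately
# with a central finite-difference second derivative.

import math

def second_deriv(f, x, h=1e-4):
    """Central finite-difference approximation to f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

a, b = 2.0, -5.0
y = lambda t: a * math.sin(t) + b * math.cos(t)

for x in (0.0, 0.5, 1.3):
    assert abs(second_deriv(y, x) + y(x)) < 1e-5  # y'' + y ~ 0
```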

In instrumentation, linearity means that a given change in an input variable gives the same change in the output of the measurement apparatus: this is highly desirable in scientific work. In general, instruments are close to linear over a certain range, and most useful within that range. In contrast, human senses are highly nonlinear: for instance, the brain completely ignores incoming light unless it exceeds a certain absolute threshold number of photons.

Linear motion traces a straight line trajectory.

Electronics

In electronics, the linear operating region of a device, for example a transistor, is where an output dependent variable (such as the transistor collector current) is directly proportional to an input dependent variable (such as the base current). This ensures that an analog output is an accurate representation of an input, typically with higher amplitude (amplified). A typical example of linear equipment is a high fidelity audio amplifier, which must amplify a signal without changing its waveform. Others are linear filters, and linear amplifiers in general.

In most scientific and technological, as distinct from mathematical, applications, something may be described as linear if the characteristic is approximately but not exactly a straight line; and linearity may be valid only within a certain operating region—for example, a high-fidelity amplifier may distort a small signal, but sufficiently little to be acceptable (acceptable but imperfect linearity); and may distort very badly if the input exceeds a certain value. [4]

Integral linearity

For an electronic device (or other physical device) that converts a quantity to another quantity, Bertram S. Kolts writes: [5] [6]

There are three basic definitions for integral linearity in common use: independent linearity, zero-based linearity, and terminal, or end-point, linearity. In each case, linearity defines how well the device's actual performance across a specified operating range approximates a straight line. Linearity is usually measured in terms of a deviation, or non-linearity, from an ideal straight line and it is typically expressed in terms of percent of full scale, or in ppm (parts per million) of full scale. Typically, the straight line is obtained by performing a least-squares fit of the data. The three definitions vary in the manner in which the straight line is positioned relative to the actual device's performance. Also, all three of these definitions ignore any gain, or offset errors that may be present in the actual device's performance characteristics.
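One of these definitions, terminal (end-point) linearity, is easy to compute: draw the straight line through the device's two end points and report the worst deviation as a percentage of full scale. The sketch below is an illustration added here; the device readings are made up:

```python
# Sketch of terminal (end-point) integral linearity: fit a straight
# line through the end points of the measured data, then report the
# worst deviation as a percent of full scale. Readings are fictional.

inputs   = [0.0, 1.0, 2.0, 3.0, 4.0]
readings = [0.00, 1.02, 2.05, 2.98, 4.00]  # hypothetical device output

# Straight line through the two end points of the data
slope = (readings[-1] - readings[0]) / (inputs[-1] - inputs[0])
ideal = [readings[0] + slope * x for x in inputs]

full_scale = readings[-1] - readings[0]
nonlinearity = max(abs(r, ) if False else abs(r - i) for r, i in zip(readings, ideal))
percent_fs = 100.0 * nonlinearity / full_scale
assert abs(percent_fs - 1.25) < 1e-9  # worst deviation: 1.25% of full scale
```

An independent-linearity figure would instead use a least-squares line, which generally yields a smaller reported deviation.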


References

  1. Edwards, Harold M. (1995). Linear Algebra. Springer. p. 78. ISBN 9780817637316.
  2. Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks Cole Cengage Learning. Section 1.2. ISBN 978-0-495-01166-8.
  3. Evans, Lawrence C. (2010) [1998]. Partial Differential Equations. Graduate Studies in Mathematics, vol. 19 (2nd ed.). Providence, R.I.: American Mathematical Society. doi:10.1090/gsm/019. ISBN 978-0-8218-4974-3. MR 2597943. Archived (PDF) from the original on 2022-10-09.
  4. Whitaker, Jerry C. (2002). The RF Transmission Systems Handbook. CRC Press. ISBN 978-0-8493-0973-1.
  5. Kolts, Bertram S. (2005). "Understanding Linearity and Monotonicity" (PDF). analogZONE. Archived from the original on February 4, 2012. Retrieved September 24, 2014.
  6. Kolts, Bertram S. (2005). "Understanding Linearity and Monotonicity". Foreign Electronic Measurement Technology. 24 (5): 30–31. Retrieved September 25, 2014.