# Triviality (mathematics)

In mathematics, the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or an object which possesses a simple structure (e.g., groups, topological spaces). [1] [2] The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which was distinguished from the more difficult quadrivium curriculum. [1] [3] The opposite of trivial is nontrivial, commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove. [2]

## Trivial and nontrivial solutions

In mathematics, the term "trivial" is often used to refer to objects (e.g., groups, topological spaces) with a very simple structure. These include, among others, the trivial group, which contains only the identity element, and the trivial topology, whose only open sets are the empty set and the space itself; further instances are listed in the Examples section below.

"Trivial" can also be used to describe solutions to an equation that have a very simple structure, but for the sake of completeness cannot be omitted. These solutions are called the trivial solutions. For example, consider the differential equation

${\displaystyle y'=y}$

where ${\displaystyle y=y(x)}$ is a function whose derivative is ${\displaystyle y'}$. The trivial solution is the zero function

${\displaystyle y(x)=0}$

while a nontrivial solution is the exponential function

${\displaystyle y(x)=e^{x}.}$
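
Both solutions can be checked numerically. The following sketch (not part of the cited sources) approximates the derivative with a central difference and confirms that the zero function and the exponential satisfy y′ = y, while an arbitrary other function does not:

```python
# Verify candidate solutions of y' = y by comparing a central-difference
# approximation of y'(x) against y(x) at a few sample points.
import math

def satisfies_y_prime_equals_y(f, xs, h=1e-6, tol=1e-4):
    """Return True if y'(x) ~ y(x) at every sample point x in xs."""
    return all(abs((f(x + h) - f(x - h)) / (2 * h) - f(x)) < tol for x in xs)

xs = [0.0, 0.5, 1.0, 2.0]
print(satisfies_y_prime_equals_y(lambda x: 0.0, xs))    # trivial solution: True
print(satisfies_y_prime_equals_y(math.exp, xs))         # nontrivial solution: True
print(satisfies_y_prime_equals_y(lambda x: x * x, xs))  # not a solution: False
```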

The differential equation ${\displaystyle f''(x)=-\lambda f(x)}$ with boundary conditions ${\displaystyle f(0)=f(L)=0}$ is important in mathematics and physics, as it can describe a particle in a box in quantum mechanics, or a standing wave on a string. It always includes the solution ${\displaystyle f(x)=0}$, which is considered obvious and hence is called the "trivial" solution. For certain values of ${\displaystyle \lambda }$ there are other solutions (sinusoids), which are called "nontrivial" solutions. [4]
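
As a concrete check, for L = 1 the sinusoid f(x) = sin(nπx) with λ = (nπ)² is such a nontrivial solution. The following sketch verifies both the boundary conditions and the equation numerically (the choice n = 3 and the sample point are arbitrary):

```python
# Check that f(x) = sin(n*pi*x/L) satisfies f(0) = f(L) = 0 and
# f'' = -lambda * f with lambda = (n*pi/L)^2, using a finite difference.
import math

L = 1.0
n = 3                      # any positive integer yields a nontrivial solution
lam = (n * math.pi / L) ** 2

def f(x):
    return math.sin(n * math.pi * x / L)

def second_derivative(g, x, h=1e-4):
    """Central-difference approximation of g''(x)."""
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

print(abs(f(0.0)) < 1e-12 and abs(f(L)) < 1e-12)         # boundary conditions
print(abs(second_derivative(f, 0.3) + lam * f(0.3)) < 1e-3)  # f'' = -lambda f
```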

Similarly, mathematicians often describe Fermat's Last Theorem as asserting that there are no nontrivial integer solutions to the equation ${\displaystyle a^{n}+b^{n}=c^{n}}$, where n is greater than 2. Clearly, there are some solutions to the equation. For example, ${\displaystyle a=b=c=0}$ is a solution for any n, but such solutions are obvious and obtainable with little effort, and hence "trivial".
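
A brute-force search over a small range illustrates this for n = 3: every solution found has a zero among a, b, and c, i.e. is trivial (by Fermat's Last Theorem, no nontrivial solution exists in any range):

```python
# Enumerate all integer solutions of a^3 + b^3 = c^3 in a small box and
# confirm that each one is trivial (contains a zero coordinate).
n = 3
solutions = [
    (a, b, c)
    for a in range(-5, 6)
    for b in range(-5, 6)
    for c in range(-5, 6)
    if a**n + b**n == c**n
]

print(len(solutions) > 0)  # trivial solutions certainly exist, e.g. (0, 0, 0)
print(all(a == 0 or b == 0 or c == 0 for (a, b, c) in solutions))  # all trivial
```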

## In mathematical reasoning

Trivial may also refer to any easy case of a proof, which for the sake of completeness cannot be ignored. For instance, proofs by mathematical induction have two parts: the "base case" which shows that the theorem is true for a particular initial value (such as n = 0 or n = 1), and the inductive step which shows that if the theorem is true for a certain value of n, then it is also true for the value n + 1. The base case is often trivial and is identified as such, although there are situations where the base case is difficult but the inductive step is trivial. Similarly, one might want to prove that some property is possessed by all the members of a certain set. The main part of the proof will consider the case of a nonempty set, and examine the members in detail; in the case where the set is empty, the property is trivially possessed by all the members, since there are none (see vacuous truth for more).
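
Both ideas above have direct computational analogues; the following sketch checks the trivial base case of the induction for the formula 0 + 1 + … + n = n(n + 1)/2, and shows vacuous truth via a quantifier over an empty collection:

```python
# 1. The "trivial" base case of an induction: the closed form for
#    0 + 1 + ... + n holds at the initial value n = 0.
def closed_form(n):
    return n * (n + 1) // 2

print(closed_form(0) == sum(range(0 + 1)))  # base case holds: True

# 2. Vacuous truth: every member of the empty set has any property,
#    because there are no members to check; all() over an empty
#    iterable is therefore True by definition.
print(all(x > x for x in []))  # True
```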

A common joke in the mathematical community is to say that "trivial" is synonymous with "proved"—that is, any theorem can be considered "trivial" once it is known to be true. [1]

Another joke concerns two mathematicians who are discussing a theorem: the first mathematician says that the theorem is "trivial". In response to the other's request for an explanation, he then proceeds with twenty minutes of exposition. At the end of the explanation, the second mathematician agrees that the theorem is trivial. These jokes point out the subjectivity of judgments about triviality. The joke also applies when the first mathematician says the theorem is trivial, but is unable to prove it himself. Often, as a joke, the theorem is then referred to as "intuitively obvious". Someone experienced in calculus, for example, would consider the following statement trivial:

${\displaystyle \int _{0}^{1}x^{2}\,dx={\frac {1}{3}}}$

However, to someone with no knowledge of integral calculus, this is not obvious at all.
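
The value of the integral can be confirmed numerically; this sketch uses the midpoint rule with a fine grid:

```python
# Midpoint-rule approximation of the integral of x^2 over [0, 1],
# which should be very close to 1/3.
N = 100_000
h = 1.0 / N
approx = sum(((i + 0.5) * h) ** 2 for i in range(N)) * h
print(abs(approx - 1 / 3) < 1e-8)  # True
```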

Triviality also depends on context. A proof in functional analysis would probably, given a number, trivially assume the existence of a larger number. However, when proving basic results about the natural numbers in elementary number theory, the proof may very well hinge on the remark that any natural number has a successor—a statement which should itself be proved or be taken as an axiom (for more, see Peano's axioms).

### Trivial proofs

In some texts, a trivial proof refers to a statement involving a material implication P → Q, where the consequent, Q, is always true. [5] Here, the proof follows immediately by virtue of the definition of material implication, as the implication is true regardless of the truth value of the antecedent P. [5]

A related concept is a vacuous truth, where the antecedent P in the material implication P → Q is always false. [5] Here, the implication is always true regardless of the truth value of the consequent Q—again by virtue of the definition of material implication. [5]
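
Both facts can be read off a truth table, treating P → Q as (not P) or Q:

```python
# Material implication: P -> Q is defined as (not P) or Q.
def implies(p, q):
    return (not p) or q

# Trivial proof: if Q is always true, P -> Q holds for either value of P.
print(all(implies(p, True) for p in (False, True)))   # True

# Vacuous truth: if P is always false, P -> Q holds for either value of Q.
print(all(implies(False, q) for q in (False, True)))  # True
```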

## Examples

• In number theory, it is often important to find factors of an integer N. Any integer N has four obvious factors: ±1 and ±N. These are called "trivial factors". Any other factor, if it exists, would be called "nontrivial". [6]
• The homogeneous matrix equation ${\displaystyle A\mathbf {x} =\mathbf {0} }$, where ${\displaystyle A}$ is a fixed matrix, ${\displaystyle \mathbf {x} }$ is an unknown vector, and ${\displaystyle \mathbf {0} }$ is the zero vector, has an obvious solution ${\displaystyle \mathbf {x} =\mathbf {0} }$. This is called the "trivial solution". Any other solutions, with ${\displaystyle \mathbf {x} \neq \mathbf {0} }$, are called "nontrivial". [7]
• In group theory, there is a very simple group with just one element in it; this is often called the "trivial group". All other groups, which are more complicated, are called "nontrivial".
• In graph theory, the trivial graph is a graph which has only 1 vertex and no edge.
• Database theory has a concept called functional dependency, written ${\displaystyle X\to Y}$. The dependency ${\displaystyle X\to Y}$ holds automatically whenever Y is a subset of X, so this type of dependency is called "trivial". All other dependencies, which hold less obviously, are called "nontrivial".
• It can be shown that the Riemann zeta function has zeros at the negative even numbers −2, −4, … Though the proof is comparatively easy, this result would still not normally be called trivial; it is called so here only by contrast, because the function's other zeros are generally unknown and are the subject of important open questions (such as the Riemann hypothesis). Accordingly, the negative even numbers are called the trivial zeros of the function, while any other zeros are called nontrivial.
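
The second bullet can be made concrete with a small 2×2 case (the matrices below are arbitrary illustrations): a singular matrix admits nontrivial solutions of Ax = 0, while a nonsingular one admits only the trivial solution.

```python
# Check whether a given vector x solves the homogeneous system A x = 0
# for a 2x2 matrix A, represented as a nested list of rows.
def solves_homogeneous(A, x):
    return all(abs(A[i][0] * x[0] + A[i][1] * x[1]) < 1e-12 for i in range(2))

singular = [[1, 2], [2, 4]]            # rows are dependent, so det = 0
print(solves_homogeneous(singular, (0, 0)))   # trivial solution: True
print(solves_homogeneous(singular, (2, -1)))  # nontrivial solution: True

nonsingular = [[1, 0], [0, 1]]         # identity matrix, det = 1
print(solves_homogeneous(nonsingular, (2, -1)))  # only the trivial works: False
```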

## References

1. Weisstein, Eric W. "Trivial". mathworld.wolfram.com. Retrieved 2019-12-14.
2. "Mathwords: Trivial". www.mathwords.com. Retrieved 2019-12-14.
3. Ayto, John (1990). Dictionary of Word Origins. University of Texas Press. p. 542. ISBN 1-55970-214-1. OCLC 33022699.
4. Zachmanoglou, E. C.; Thoe, Dale W. (1986). Introduction to Partial Differential Equations with Applications. p. 309. ISBN 9780486652511.
5. Chartrand, Gary; Polimeni, Albert D.; Zhang, Ping (2008). Mathematical Proofs: A Transition to Advanced Mathematics (2nd ed.). Boston: Pearson/Addison Wesley. p. 68. ISBN 978-0-321-39053-0.
6. Yan, Song Y. (2002). Number Theory for Computing (2nd, illustrated ed.). Berlin: Springer. p. 250. ISBN 3-540-43072-5.
7. Jeffrey, Alan (2004). Mathematics for Engineers and Scientists (6th ed.). CRC Press. p. 502. ISBN 1-58488-488-6.