Catalecticant

But the catalecticant of the biquadratic function of x, y was first brought into notice as an invariant by Mr Boole; and the discriminant of the quadratic function of x, y is identical with its catalecticant, as also with its Hessian. Meicatalecticizant would more completely express the meaning of that which, for the sake of brevity, I denominate the catalecticant.

Sylvester (1852), quoted by Miller (2010)

In mathematical invariant theory, the catalecticant of a form of even degree is a polynomial in its coefficients that vanishes when the form is a sum of an unusually small number of powers of linear forms. It was introduced by Sylvester (1852); see Miller (2010). The word catalectic refers to an incomplete line of verse, lacking a syllable at the end or ending with an incomplete foot.

Binary forms

The catalecticant of a binary form of degree 2n is a polynomial in its coefficients that vanishes when the binary form is a sum of at most n powers of linear forms (Sturmfels 1993).

The catalecticant of a binary form can be given as the determinant of a catalecticant matrix (Eisenbud 1988), also called a Hankel matrix: a square matrix with constant (positive-sloping) skew-diagonals, such as

\begin{pmatrix} a & b & c \\ b & c & d \\ c & d & e \end{pmatrix}.
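Concretely, under one common normalization (an assumption here, not fixed by the text above), the binary form of degree 2n is written with binomial coefficients, and the catalecticant becomes the (n + 1) × (n + 1) Hankel determinant in its coefficients:

f(x,y) = \sum_{i=0}^{2n} \binom{2n}{i} a_i \, x^{2n-i} y^i,
\qquad
\operatorname{Cat}(f) = \det \begin{pmatrix}
a_0 & a_1 & \cdots & a_n \\
a_1 & a_2 & \cdots & a_{n+1} \\
\vdots & \vdots & \ddots & \vdots \\
a_n & a_{n+1} & \cdots & a_{2n}
\end{pmatrix}.

This determinant vanishes precisely when the Hankel matrix has rank at most n, which is the sum-of-at-most-n-powers condition above. The 3 × 3 matrix displayed earlier is the case n = 2, a binary quartic with coefficients a, ..., e.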

Catalecticants of quartic forms

The catalecticant of a quartic form is the resultant of its second partial derivatives. For binary quartics the catalecticant vanishes when the form is a sum of two 4th powers; for ternary quartics, when it is a sum of five; for quaternary quartics, nine; and for quinary quartics, fourteen (Elliott 1913, p. 295).
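The binary case can be checked directly from the Hankel determinant above. The following is a minimal sketch in Python with sympy; the helper catalecticant and the binomial normalization follow the formula in the previous section and are illustrative, not a standard library API.

import sympy as sp

x, y = sp.symbols('x y')

def catalecticant(f, n):
    # Read off a_i from f = sum_i binom(2n, i) * a_i * x^(2n-i) * y^i,
    # then return the (n+1) x (n+1) Hankel determinant det(a_{i+j}).
    poly = sp.Poly(f, x, y)
    a = [poly.coeff_monomial(x**(2*n - i) * y**i) / sp.binomial(2*n, i)
         for i in range(2*n + 1)]
    return sp.Matrix(n + 1, n + 1, lambda i, j: a[i + j]).det()

print(catalecticant(x**4 + y**4, 2))       # 0: a sum of two 4th powers
print(catalecticant(x**4 + x**2*y**2, 2))  # -1/216: not such a sum

The same helper handles any even degree, for example binary sextics with n = 3.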

References
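
Eisenbud, David (1988), "Linear sections of determinantal varieties", American Journal of Mathematics 110 (3): 541–575.

Elliott, Edwin Bailey (1913), An Introduction to the Algebra of Quantics (2nd ed.), Oxford: Clarendon Press.

Miller, Jeff (2010), "Earliest Known Uses of Some of the Words of Mathematics (C)".

Sturmfels, Bernd (1993), Algorithms in Invariant Theory, Texts and Monographs in Symbolic Computation, Springer-Verlag.

Sylvester, J. J. (1852), "On the principles of the calculus of forms", Cambridge and Dublin Mathematical Journal 7: 52–97.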