Vector logic [1] [2] is an algebraic model of elementary logic based on matrix algebra. Vector logic assumes that truth values are mapped onto vectors, and that the monadic and dyadic operations are executed by matrix operators. "Vector logic" has also been used to refer to the representation of classical propositional logic as a vector space, [3] [4] in which the unit vectors are propositional variables. Predicate logic can be represented as a vector space of the same type in which the axes represent the predicate letters. [5] In the vector space for propositional logic the origin represents the false, F, and the infinite periphery represents the true, T, whereas in the space for predicate logic the origin represents "nothing" and the periphery represents the flight from nothing, or "something".
Classic binary logic is represented by a small set of mathematical functions depending on one (monadic) or two (dyadic) variables. In the binary set, the value 1 corresponds to true and the value 0 to false. A two-valued vector logic requires a correspondence between the truth-values true (t) and false (f), and two q-dimensional normalized real-valued column vectors s and n, hence:

$t \mapsto s \quad \text{and} \quad f \mapsto n$

(where $q$ is an arbitrary natural number, and "normalized" means that the length of the vector is 1; usually s and n are orthogonal vectors). This correspondence generates a space of vector truth-values: $V_2 = \{s, n\}$. The basic logical operations defined using this set of vectors lead to matrix operators.
The operations of vector logic are based on the scalar product between q-dimensional column vectors, $u^T v = \langle u, v \rangle$: the orthonormality between vectors s and n implies that $\langle u, v \rangle = 1$ if $u = v$, and $\langle u, v \rangle = 0$ if $u \neq v$, where $u, v \in V_2$.
The monadic operators result from the application $\mathrm{Mon}: V_2 \to V_2$, and the associated matrices have q rows and q columns. The two basic monadic operators for this two-valued vector logic are the identity and the negation:

Identity: $I = ss^T + nn^T$, with $Is = s$ and $In = n$.
Negation: $N = ns^T + sn^T$, with $Ns = n$ and $Nn = s$.
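The following is a minimal NumPy sketch of these two operators, using the standard basis of R² as one admissible choice of s and n (this particular choice is an illustrative assumption, not prescribed by the formalism):

```python
import numpy as np

# Truth values as orthonormal column vectors (q = 2, standard basis).
s = np.array([[1.0], [0.0]])   # true
n = np.array([[0.0], [1.0]])   # false

# Monadic operators as sums of outer products.
I = s @ s.T + n @ n.T          # identity: I s = s, I n = n
N = n @ s.T + s @ n.T          # negation: N s = n, N n = s

assert np.allclose(I @ s, s) and np.allclose(I @ n, n)
assert np.allclose(N @ s, n) and np.allclose(N @ n, s)
```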
The 16 two-valued dyadic operators correspond to functions of the type $\mathrm{Dyad}: V_2 \otimes V_2 \to V_2$; the dyadic matrices have q rows and q² columns. The matrices that execute these dyadic operations are based on the properties of the Kronecker product. Two properties of this product are essential for the formalism of vector logic:

1. The mixed-product property: if A, B, C and D are matrices of such sizes that the products AC and BD can be formed, then $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$.
2. The distributivity of transposition over the Kronecker product: $(A \otimes B)^T = A^T \otimes B^T$.
Using these properties, expressions for the dyadic logic functions can be obtained. For instance, the conjunction, the disjunction and the implication are executed by the matrices

$C = s(s \otimes s)^T + n(s \otimes n)^T + n(n \otimes s)^T + n(n \otimes n)^T,$
$D = s(s \otimes s)^T + s(s \otimes n)^T + s(n \otimes s)^T + n(n \otimes n)^T,$
$L = s(s \otimes s)^T + n(s \otimes n)^T + s(n \otimes s)^T + s(n \otimes n)^T,$

which act on a pair of vector truth-values through the Kronecker product, e.g. $C(u \otimes v)$ for $u, v \in V_2$.
The matrices S and P correspond to the Sheffer (NAND) and the Peirce (NOR) operations, respectively:

$S = NC, \qquad P = ND.$
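A short NumPy sketch of this construction (again taking the standard basis as s and n, an illustrative assumption): it builds the conjunction and disjunction matrices from Kronecker products and checks the NAND truth table.

```python
import numpy as np

s = np.array([[1.0], [0.0]])   # true
n = np.array([[0.0], [1.0]])   # false
N = n @ s.T + s @ n.T          # negation

kron = np.kron  # Kronecker product of two column vectors gives a q^2-dimensional column

# Dyadic operators as q x q^2 matrices built from the truth tables.
C = s @ kron(s, s).T + n @ (kron(s, n) + kron(n, s) + kron(n, n)).T   # conjunction
D = s @ (kron(s, s) + kron(s, n) + kron(n, s)).T + n @ kron(n, n).T   # disjunction
S = N @ C   # Sheffer (NAND)
P = N @ D   # Peirce (NOR)

# NAND truth table: only (true, true) maps to false.
assert np.allclose(S @ kron(s, s), n)
for u, v in [(s, n), (n, s), (n, n)]:
    assert np.allclose(S @ kron(u, v), s)
```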
Here are numerical examples of some basic logical gates implemented as matrices for two different sets of 2-dimensional orthonormal vectors for s and n.
Set 1: $s = (1, 0)^T$ and $n = (0, 1)^T$.

In this case the identity and negation operators are the identity and the anti-diagonal identity matrices:

$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad N = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$

and the matrices for conjunction, disjunction and implication are

$C = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad L = \begin{pmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix},$

respectively.
Set 2: $s = \tfrac{1}{\sqrt{2}}(1, 1)^T$ and $n = \tfrac{1}{\sqrt{2}}(1, -1)^T$.

Here the identity operator is the identity matrix, but the negation operator is no longer the anti-diagonal identity matrix:

$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad N = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$

The resulting matrices for conjunction, disjunction and implication are:

$C = \frac{\sqrt{2}}{4}\begin{pmatrix} 4 & 0 & 0 & 0 \\ -2 & 2 & 2 & 2 \end{pmatrix}, \quad D = \frac{\sqrt{2}}{4}\begin{pmatrix} 4 & 0 & 0 & 0 \\ 2 & 2 & 2 & -2 \end{pmatrix}, \quad L = \frac{\sqrt{2}}{4}\begin{pmatrix} 4 & 0 & 0 & 0 \\ 2 & 2 & -2 & 2 \end{pmatrix},$

respectively.
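As a numerical check, and assuming Set 2 is the rotated orthonormal pair shown above, the following sketch verifies that the operators built from these vectors still reproduce the classical truth tables:

```python
import numpy as np

# Set 2: a rotated orthonormal pair (the choice assumed above).
s = np.array([[1.0], [1.0]]) / np.sqrt(2)   # true
n = np.array([[1.0], [-1.0]]) / np.sqrt(2)  # false

I = s @ s.T + n @ n.T
N = n @ s.T + s @ n.T
C = s @ np.kron(s, s).T + n @ (np.kron(s, n) + np.kron(n, s) + np.kron(n, n)).T

# I is still the 2x2 identity, but N is no longer anti-diagonal.
assert np.allclose(I, np.eye(2))
print(np.round(N, 3))                        # diag(1, -1) for this choice of s and n

# The conjunction truth table does not depend on the chosen basis.
assert np.allclose(C @ np.kron(s, s), s)
assert np.allclose(C @ np.kron(s, n), n)
```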
In the two-valued logic, the conjunction and the disjunction operations satisfy De Morgan's law, p∧q ≡ ¬(¬p∨¬q), and its dual, p∨q ≡ ¬(¬p∧¬q). For the two-valued vector logic this law is also verified:

$C(u \otimes v) = N D\big((Nu) \otimes (Nv)\big), \qquad D(u \otimes v) = N C\big((Nu) \otimes (Nv)\big).$

The Kronecker product implies the following factorization:

$(Nu) \otimes (Nv) = (N \otimes N)(u \otimes v).$

Then it can be proved that in the two-dimensional vector logic De Morgan's law is a law involving operators, and not only a law concerning operations: [6]

$C = N D (N \otimes N), \qquad D = N C (N \otimes N).$
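A brief NumPy verification of this operator form of De Morgan's law (standard-basis vectors assumed for s and n):

```python
import numpy as np

s = np.array([[1.0], [0.0]]); n = np.array([[0.0], [1.0]])
N = n @ s.T + s @ n.T
C = s @ np.kron(s, s).T + n @ (np.kron(s, n) + np.kron(n, s) + np.kron(n, n)).T
D = s @ (np.kron(s, s) + np.kron(s, n) + np.kron(n, s)).T + n @ np.kron(n, n).T

# De Morgan as an identity between operators, not just between outputs.
assert np.allclose(C, N @ D @ np.kron(N, N))
assert np.allclose(D, N @ C @ np.kron(N, N))
```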
In the classical propositional calculus, the law of contraposition p → q ≡ ¬q → ¬p is proved because the equivalence holds for all possible combinations of truth-values of p and q. [7] Instead, in vector logic, the law of contraposition emerges from a chain of equalities within the rules of matrix algebra and Kronecker products. Using the representation of the implication as a disjunction, $L = D(N \otimes I)$, one obtains:

$L(u \otimes v) = D\big((Nu) \otimes v\big) = D\big(v \otimes (Nu)\big) = D\big((N(Nv)) \otimes (Nu)\big) = L\big((Nv) \otimes (Nu)\big).$

This result is based on the fact that D, the disjunction matrix, represents a commutative operation.
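The chain of equalities can be checked numerically; the sketch below (standard-basis vectors assumed) also verifies the operator identity L = D(N ⊗ I) used in the first step:

```python
import numpy as np

s = np.array([[1.0], [0.0]]); n = np.array([[0.0], [1.0]])
I = s @ s.T + n @ n.T
N = n @ s.T + s @ n.T
D = s @ (np.kron(s, s) + np.kron(s, n) + np.kron(n, s)).T + n @ np.kron(n, n).T
L = s @ (np.kron(s, s) + np.kron(n, s) + np.kron(n, n)).T + n @ np.kron(s, n).T  # implication

# p -> q expressed as (not p) or q, at the operator level.
assert np.allclose(L, D @ np.kron(N, I))

# Contraposition: L(u (x) v) equals L((Nv) (x) (Nu)) for every pair of truth values.
for u in (s, n):
    for v in (s, n):
        assert np.allclose(L @ np.kron(u, v), L @ np.kron(N @ v, N @ u))
```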
Many-valued logic was developed by many researchers, particularly by Jan Łukasiewicz, and allows extending logical operations to truth-values that include uncertainties. [8] In the case of two-valued vector logic, uncertainties in the truth values can be introduced using vectors with s and n weighted by probabilities.
Let $u = \alpha s + \beta n$, with $\alpha, \beta \in [0, 1]$ and $\alpha + \beta = 1$, be this kind of "probabilistic" vector. Here, the many-valued character of the logic is introduced a posteriori via the uncertainties introduced in the inputs. [1]
The outputs of this many-valued logic can be projected onto scalar functions and generate a particular class of probabilistic logic with similarities to the many-valued logic of Reichenbach. [9] [10] [11] Given two probabilistic vectors $u = \alpha s + \beta n$ and $v = \alpha' s + \beta' n$, and a dyadic logical matrix $G$, a scalar probabilistic logic is provided by the projection over vector s:

$\mathrm{Val}(\text{scalars}) = s^T G (u \otimes v).$
Here are the main results of these projections:

$\mathrm{NOT}(\alpha) = s^T N u = 1 - \alpha$
$\mathrm{AND}(\alpha, \alpha') = s^T C (u \otimes v) = \alpha\alpha'$
$\mathrm{OR}(\alpha, \alpha') = s^T D (u \otimes v) = \alpha + \alpha' - \alpha\alpha'$
$\mathrm{IMPL}(\alpha, \alpha') = s^T L (u \otimes v) = 1 - \alpha(1 - \alpha')$

The associated negations are:

$\mathrm{NAND}(\alpha, \alpha') = s^T S (u \otimes v) = 1 - \alpha\alpha'$
$\mathrm{NOR}(\alpha, \alpha') = s^T P (u \otimes v) = (1 - \alpha)(1 - \alpha')$
If the scalar values belong to the set {0, 1/2, 1}, this many-valued scalar logic is, for many of the operators, almost identical to the 3-valued logic of Łukasiewicz. Also, it has been proved that when the monadic or dyadic operators act over probabilistic vectors belonging to this set, the output is also an element of this set. [6]
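A small NumPy sketch of these projections, with illustrative truth probabilities 0.7 and 0.4 (arbitrary values chosen here) and standard-basis vectors for s and n:

```python
import numpy as np

s = np.array([[1.0], [0.0]]); n = np.array([[0.0], [1.0]])
C = s @ np.kron(s, s).T + n @ (np.kron(s, n) + np.kron(n, s) + np.kron(n, n)).T
D = s @ (np.kron(s, s) + np.kron(s, n) + np.kron(n, s)).T + n @ np.kron(n, n).T

alpha1, alpha2 = 0.7, 0.4                  # probabilities of truth for the two inputs
u = alpha1 * s + (1 - alpha1) * n          # "probabilistic" truth-value vectors
v = alpha2 * s + (1 - alpha2) * n

and_val = (s.T @ C @ np.kron(u, v)).item() # projection over s
or_val = (s.T @ D @ np.kron(u, v)).item()

assert np.isclose(and_val, alpha1 * alpha2)                        # product rule for AND
assert np.isclose(or_val, alpha1 + alpha2 - alpha1 * alpha2)       # probabilistic OR
```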
This operator was originally defined for qubits in the framework of quantum computing. [12] [13] In vector logic, this operator can be extended for arbitrary orthonormal truth values. [2] [14] There are, in fact, two square roots of NOT:

$\sqrt{N}_1 = \tfrac{1}{2}\big[(1 + i)I + (1 - i)N\big], \qquad \sqrt{N}_2 = \tfrac{1}{2}\big[(1 - i)I + (1 + i)N\big],$

with $i = \sqrt{-1}$. $\sqrt{N}_1$ and $\sqrt{N}_2$ are complex conjugates: $\sqrt{N}_2 = (\sqrt{N}_1)^*$, and note that $(\sqrt{N}_1)^2 = N$ and $(\sqrt{N}_2)^2 = N$. Another interesting point is the analogy with the two square roots of −1: the positive root $+i$ corresponds to $\sqrt{N}_1$, and the negative root $-i$ corresponds to $\sqrt{N}_2$; as a consequence, $\sqrt{N}_1\sqrt{N}_2 = \sqrt{N}_2\sqrt{N}_1 = I$, just as $(+i)(-i) = 1$.
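These relations can be verified directly with complex matrices; a minimal sketch (standard-basis vectors assumed):

```python
import numpy as np

s = np.array([[1.0], [0.0]]); n = np.array([[0.0], [1.0]])
I = s @ s.T + n @ n.T
N = n @ s.T + s @ n.T

sqrtN1 = 0.5 * ((1 + 1j) * I + (1 - 1j) * N)
sqrtN2 = 0.5 * ((1 - 1j) * I + (1 + 1j) * N)   # complex conjugate of sqrtN1

assert np.allclose(sqrtN1 @ sqrtN1, N)         # each root squares to NOT
assert np.allclose(sqrtN2 @ sqrtN2, N)
assert np.allclose(sqrtN2, sqrtN1.conj())
assert np.allclose(sqrtN1 @ sqrtN2, I)         # conjugate roots compose to the identity
```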
Early attempts to use linear algebra to represent logic operations can be traced back to Peirce and Copilowish, [15] particularly in the use of logical matrices to interpret the calculus of relations.
The approach has been inspired by neural network models based on the use of high-dimensional matrices and vectors. [16] [17] Vector logic is a direct translation into a matrix–vector formalism of the classical Boolean polynomials. [18] This kind of formalism has been applied to develop a fuzzy logic in terms of complex numbers. [19] Other matrix and vector approaches to logical calculus have been developed in the framework of quantum physics, computer science and optics. [20] [21]
The Indian biophysicist G.N. Ramachandran developed a formalism using algebraic matrices and vectors to represent many operations of classical Jain logic known as Syad and Saptbhangi; see Indian logic. [22] It requires independent affirmative evidence for each assertion in a proposition, and does not make the assumption of binary complementation.
George Boole established the development of logical operations as polynomials. [18] For the case of monadic operators (such as identity or negation), the Boolean polynomials look as follows:

$f(x) = f(1)\,x + f(0)\,(1 - x)$
The four different monadic operations result from the different binary values for the coefficients. The identity operation requires f(1) = 1 and f(0) = 0, and negation occurs if f(1) = 0 and f(0) = 1. For the 16 dyadic operators, the Boolean polynomials are of the form:

$f(x, y) = f(1,1)\,xy + f(1,0)\,x(1 - y) + f(0,1)\,(1 - x)y + f(0,0)\,(1 - x)(1 - y)$
The dyadic operations can be translated to this polynomial format when the coefficients f take the values indicated in the respective truth tables. For instance, the NAND operation requires that:

$f(1,1) = 0 \quad \text{and} \quad f(1,0) = f(0,1) = f(0,0) = 1.$
These Boolean polynomials can be immediately extended to any number of variables, producing a large potential variety of logical operators. In vector logic, the matrix–vector structure of logical operators is an exact translation into the format of linear algebra of these Boolean polynomials, where x and 1 − x correspond to vectors s and n respectively (and the same for y and 1 − y). In the example of NAND, f(1,1) = n and f(1,0) = f(0,1) = f(0,0) = s, and the matrix version becomes:

$S = n(s \otimes s)^T + s\big[(s \otimes n)^T + (n \otimes s)^T + (n \otimes n)^T\big].$
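A short sketch of this translation: the helper below (a hypothetical function written only for illustration) builds a dyadic matrix from its four truth-table coefficients by replacing 1 with s and 0 with n, mirroring the Boolean polynomial:

```python
import numpy as np

s = np.array([[1.0], [0.0]]); n = np.array([[0.0], [1.0]])

def dyadic_operator(f11, f10, f01, f00):
    """Translate a Boolean truth table f(x, y) into its vector-logic matrix:
    each coefficient selects s (for 1) or n (for 0) as the output vector."""
    pick = lambda bit: s if bit else n
    return (pick(f11) @ np.kron(s, s).T + pick(f10) @ np.kron(s, n).T +
            pick(f01) @ np.kron(n, s).T + pick(f00) @ np.kron(n, n).T)

NAND = dyadic_operator(0, 1, 1, 1)
AND = dyadic_operator(1, 0, 0, 0)
N = n @ s.T + s @ n.T

assert np.allclose(NAND, N @ AND)             # NAND as negated conjunction
assert np.allclose(NAND @ np.kron(s, s), n)   # only (true, true) maps to false
```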