In computer science, a straight-line program is, informally, a program that contains no loops or tests and consists of a sequence of steps, each of which applies an operation to previously computed elements.
This article is devoted to the case where the allowed operations are the operations of a group, that is, multiplication and inversion. More specifically, a straight-line program (SLP) for a finite group G = ⟨S⟩ is a finite sequence L of elements of G such that every element of L either belongs to S, is the inverse of a preceding element, or is the product of two preceding elements. An SLP L is said to compute a group element g ∈ G if g ∈ L; in this way g is encoded by a word in S and its inverses.
Intuitively, an SLP computing some g ∈ G is an efficient way of storing g as a group word over S; observe that if g is constructed in i steps, the word length of g may be exponential in i, but the length of the corresponding SLP is linear in i. This has important applications in computational group theory, where SLPs are used to encode group elements efficiently as words over a given generating set.
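To make the compression concrete, here is a minimal Python sketch (illustrative only; the modulus p and the generator x are arbitrary choices, not from the article). It builds an SLP of length n + 1 by repeated squaring in the multiplicative group of integers mod p; the element it computes corresponds to a word of length 2^n over the single generator x.

```python
# Minimal sketch: repeated squaring as a straight-line program.
# The group here is the multiplicative group mod p (an arbitrary stand-in).
p = 101            # hypothetical modulus
x = 3              # hypothetical generator

n = 10
slp = [x]                              # step 1: a generator
for _ in range(n):                     # steps 2..n+1: square the previous entry
    slp.append(slp[-1] * slp[-1] % p)

# The final entry equals x**(2**n) mod p: as a word over {x} it has length
# 2**n, while the SLP that produced it has only n + 1 entries.
assert slp[-1] == pow(x, 2 ** n, p)
print(len(slp), 2 ** n)                # 11 1024
```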
Straight-line programs were introduced by Babai and Szemerédi in 1984 [1] as a tool for studying the computational complexity of certain matrix group properties. Babai and Szemerédi proved that every element of a finite group G has an SLP of length O(log^2|G|) in every generating set.
An efficient solution to the constructive membership problem is crucial to many group-theoretic algorithms. It can be stated in terms of SLPs as follows. Given a finite group G = ⟨S⟩ and g ∈ G, find a straight-line program computing g over S. The constructive membership problem is often studied in the setting of black box groups. The elements are encoded by bit strings of a fixed length. Three oracles are provided for the group-theoretic functions of multiplication, inversion, and checking for equality with the identity. A black box algorithm is one which uses only these oracles. Hence, straight-line programs for black box groups are black box algorithms.
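As an illustration of the black box setup (the class name BlackBoxGroup and the toy group below are assumptions for this sketch, not part of any standard library), the three oracles can be modelled as callables on bit strings:

```python
# Minimal sketch of a black box group: elements are fixed-length bit strings,
# and an algorithm may only touch them through the three oracles below.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BlackBoxGroup:
    multiply: Callable[[bytes, bytes], bytes]   # oracle for the product of two elements
    invert: Callable[[bytes], bytes]            # oracle for the inverse of an element
    is_identity: Callable[[bytes], bool]        # oracle for equality with the identity

# Toy instantiation: the additive group Z/255 encoded in a single byte.
zmod = BlackBoxGroup(
    multiply=lambda a, b: bytes([(a[0] + b[0]) % 255]),
    invert=lambda a: bytes([(-a[0]) % 255]),
    is_identity=lambda a: a[0] == 0,
)

g = bytes([100])
assert zmod.is_identity(zmod.multiply(g, zmod.invert(g)))   # g * g^-1 = identity
```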
Explicit straight-line programs are given for a wealth of finite simple groups in the online ATLAS of Finite Group Representations.
Let G be a finite group and let S be a subset of G. A sequence L = (g1,...,gm) of elements of G is a straight-line program over S if each gi can be obtained by one of the following three rules:
1. gi ∈ S,
2. gi = gj^−1 for some j < i,
3. gi = gj·gk for some j, k < i.
The straight-line cost c(g|S) of an element g ∈ G is the length of a shortest straight-line program over S computing g. The cost is infinite if g is not in the subgroup generated by S.
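For very small groups the straight-line cost can be computed by brute force. The sketch below (illustrative only; the function names are made up) searches breadth-first over sets of group elements that can be built up one at a time by the three rules above, and applies this to the dihedral group example given later in the article.

```python
# Brute-force straight-line cost for a tiny permutation group.
# Permutations are tuples p with p[i] the image of i, composed left to right.

def mul(p, q):
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def slp_cost(g, S, max_len=8):
    """c(g|S): size of the smallest set of elements, buildable one element at a
    time by the three SLP rules, that contains g (breadth-first search)."""
    layer = {frozenset()}
    for length in range(1, max_len + 1):
        nxt = set()
        for T in layer:
            # every element obtainable from T in one step
            choices = set(S) | {inv(t) for t in T} | {mul(t, u) for t in T for u in T}
            for x in choices - T:
                if x == g:
                    return length
                nxt.add(T | {x})
        layer = nxt
    return None            # no SLP of length <= max_len found

# D12 acting on the hexagon's vertices 0..5.
rho = tuple((i + 1) % 6 for i in range(6))   # rotation by 60 degrees
lam = tuple((-i) % 6 for i in range(6))      # a reflection
target = mul(mul(mul(lam, rho), rho), rho)   # the element lambda * rho^3
print(slp_cost(target, [rho, lam]))          # length of a shortest SLP for it
```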
A straight-line program is similar to a derivation in predicate logic. The elements of S correspond to axioms and the group operations correspond to the rules of inference.
Let G be a finite group and let S be a subset of G. A straight-line program of length m over S computing some g ∈ G is a sequence of expressions (w1,...,wm) such that, for each i, either wi is a symbol for some element of S, or wi = (wj,-1) for some j < i, or wi = (wj,wk) for some j,k < i, and such that wm evaluates to g when the expressions are evaluated in G in the obvious manner.
The original definition appearing in [2] requires that G = ⟨S⟩. The definition presented above is a common generalisation of this.
From a computational perspective, the formal definition of a straight-line program has some advantages. Firstly, a sequence of abstract expressions requires less memory than terms over the generating set. Secondly, it allows straight-line programs to be constructed in one representation of G and evaluated in another. This is an important feature of some algorithms. [2]
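The following sketch (the function and instruction names are illustrative, not from any particular system) stores an SLP as a list of abstract instructions and evaluates the same program in two different representations, which is the feature described above.

```python
# Minimal sketch: an SLP as abstract expressions, evaluated in any representation
# of G by supplying that representation's generators and operations.

def evaluate_slp(program, generators, multiply, invert):
    """program is a list of instructions, each one of
         ('gen', s)     -- a symbol s for an element of S
         ('inv', j)     -- inverse of the j-th expression (j < i)
         ('mul', j, k)  -- product of the j-th and k-th expressions (j, k < i)
       The value of the last expression is returned."""
    values = []
    for instr in program:
        if instr[0] == 'gen':
            values.append(generators[instr[1]])
        elif instr[0] == 'inv':
            values.append(invert(values[instr[1]]))
        else:  # 'mul'
            values.append(multiply(values[instr[1]], values[instr[2]]))
    return values[-1]

# The same abstract program, computing a^3, evaluated in two representations
# of the cyclic group of order 6: integers mod 6, and rotations of a hexagon.
program = [('gen', 'a'), ('mul', 0, 0), ('mul', 1, 0)]

print(evaluate_slp(program, {'a': 1},
                   multiply=lambda x, y: (x + y) % 6,
                   invert=lambda x: (-x) % 6))              # 3

rot = tuple((i + 1) % 6 for i in range(6))                  # rotation as a permutation
print(evaluate_slp(program, {'a': rot},
                   multiply=lambda p, q: tuple(q[p[i]] for i in range(6)),
                   invert=lambda p: tuple(sorted(range(6), key=lambda i: p[i]))))
```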
The dihedral group D12 is the group of symmetries of a hexagon. It can be generated by a 60 degree rotation ρ and a reflection λ. The left column of the following is a straight-line program for λρ^3; the right column records the group element computed at each step:

g1 = λ          λ
g2 = ρ          ρ
g3 = g2·g2      ρ^2
g4 = g3·g2      ρ^3
g5 = g1·g4      λρ^3
In S6, the group of permutations on six letters, we can take α = (1 2 3 4 5 6) and β = (1 2) as generators. The left column of the following is an example of a straight-line program computing (1 2 3)(4 5 6); the right column records the permutation computed at each step, with permutations composed from left to right:

g1 = α          (1 2 3 4 5 6)
g2 = β          (1 2)
g3 = g1·g1      (1 3 5)(2 4 6)
g4 = g3·g1      (1 4)(2 5)(3 6)
g5 = g4·g2      (1 4 2 5)(3 6)
g6 = g5·g4      (4 5)
g7 = g2·g6      (1 2)(4 5)
g8 = g1^−1      (1 6 5 4 3 2)
g9 = g8·g7      (1 6 4 3)
g10 = g9·g1     (2 3)(5 6)
g11 = g10·g7    (1 2 3)(4 5 6)
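The following sketch (not part of the original table) checks the program above by evaluating each step on explicit permutations, composed from left to right.

```python
# Verification of the SLP above; permutations on {1,...,6} are stored as dicts
# and mul(p, q) applies p first and then q.

def mul(p, q):
    return {i: q[p[i]] for i in p}

def inv(p):
    return {p[i]: i for i in p}

def cycle(*pts, n=6):              # the cycle (pts[0] pts[1] ...) on {1,...,n}
    p = {i: i for i in range(1, n + 1)}
    for a, b in zip(pts, pts[1:] + (pts[0],)):
        p[a] = b
    return p

alpha = cycle(1, 2, 3, 4, 5, 6)
beta = cycle(1, 2)

g = [alpha, beta]              # g1, g2
g.append(mul(g[0], g[0]))      # g3  = g1 g1
g.append(mul(g[2], g[0]))      # g4  = g3 g1
g.append(mul(g[3], g[1]))      # g5  = g4 g2
g.append(mul(g[4], g[3]))      # g6  = g5 g4
g.append(mul(g[1], g[5]))      # g7  = g2 g6
g.append(inv(g[0]))            # g8  = g1^-1
g.append(mul(g[7], g[6]))      # g9  = g8 g7
g.append(mul(g[8], g[0]))      # g10 = g9 g1
g.append(mul(g[9], g[6]))      # g11 = g10 g7

assert g[10] == mul(cycle(1, 2, 3), cycle(4, 5, 6))   # g11 = (1 2 3)(4 5 6)
```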
Short descriptions of finite groups. Straight-line programs can be used to study compression of finite groups via first-order logic. They provide a tool to construct "short" sentences describing G (i.e. much shorter than |G|). In more detail, SLPs are used to prove that every finite simple group has a first-order description of length O(log|G|), and every finite group G has a first-order description of length O(log^3|G|). [3]
Straight-line programs computing generating sets for maximal subgroups of finite simple groups. The online ATLAS of Finite Group Representations [4] provides abstract straight-line programs for computing generating sets of maximal subgroups for many finite simple groups.
Example: The group Sz(32), belonging to the infinite family of Suzuki groups, can be generated by two elements a and b, where a has order 2, b has order 4, ab has order 5, ab^2 has order 25 and abab^2ab^3 has order 25. A straight-line program computing a generating set for a maximal subgroup E32·E32⋊C31 from a and b can be found in the online ATLAS of Finite Group Representations.
The reachability theorem states that, given a finite group G generated by S, each g ∈ G has cost at most (1 + lg|G|)^2. This can be understood as a bound on how hard it is to generate a group element from the generators.
Here the function lg(x) is an integer-valued version of the logarithm function: for k ≥ 1 let lg(k) = max{r : 2^r ≤ k}.
The idea of the proof is to construct a set Z = {z1,...,zs} that will work as a new generating set (s will be defined during the process). It is usually larger than S, but any element of G can be expressed as a word of length at most 2|Z| over Z. The set Z is constructed by inductively defining an increasing sequence of sets K(i).
Let K(i) = {z1^α1·z2^α2·...·zi^αi : αj ∈ {0,1}}, where zj is the group element added to Z at the j-th step. Let c(i) denote the length of a shortest straight-line program that contains Z(i) = {z1,...,zi}. Let K(0) = {1G} and c(0) = 0. The set Z is defined recursively: as long as K(i)^−1K(i) ≠ G, choose zi+1 to be an element of G \ K(i)^−1K(i) of the form g1·g2 with g1 ∈ K(i)^−1K(i) and g2 ∈ S; the process stops at the first index s with K(s)^−1K(s) = G.
By this process, Z is defined so that, when the construction stops, every g ∈ G can be written as an element of K(s)^−1K(s), which makes g easy to generate from Z.
We now need to verify the following claim to ensure that the process terminates within lg(|G|) many steps:
Claim 1 — If i < s then |K(i+1)| = 2|K(i)|.
It is immediate that |K(i+1)| ≤ 2|K(i)|. Now suppose for a contradiction that |K(i+1)| < 2|K(i)|. By the pigeonhole principle there are two distinct exponent vectors (α1,...,αi+1) ≠ (β1,...,βi+1), with αj, βj ∈ {0,1}, such that z1^α1·z2^α2·...·zi+1^αi+1 = z1^β1·z2^β2·...·zi+1^βi+1. Let r be the largest index such that αr ≠ βr, and assume without loss of generality that αr = 1 and βr = 0. Cancelling the common factors after position r and rearranging gives zr = zr−1^−αr−1·...·z1^−α1·z1^β1·z2^β2·...·zr−1^βr−1, hence zr ∈ K(r−1)^−1K(r−1). This contradicts the choice of zr outside K(r−1)^−1K(r−1).
The next claim is used to show that the cost of every group element is within the required bound.
Claim 2 — c(i) ≤ i^2 − i.
Since c(0) = 0, it suffices to show that c(i+1) − c(i) ≤ 2i; summing these differences then gives c(i) ≤ 2(0 + 1 + ... + (i−1)) = i^2 − i. The Cayley graph of G is connected, so if i < s and K(i)^−1K(i) ≠ G, then there is an element of the form g1·g2 ∈ G \ K(i)^−1K(i) with g1 ∈ K(i)^−1K(i) and g2 ∈ S.
It takes at most 2i steps to generate g1 ∈ K(i)^−1K(i) from Z(i), since it is a product of at most 2i factors, each a zj or the inverse of one. The only such product using the maximum number 2i of factors is the identity, which never needs to be generated, so 2i − 1 steps suffice. One further multiplication by g2 then produces zi+1 = g1·g2 ∈ G \ K(i)^−1K(i), so at most 2i new steps are needed in total.
We can now finish the proof. Since K(s)^−1K(s) = G, any g ∈ G can be written in the form k1^−1·k2 with k1, k2 ∈ K(s). By Claim 2, at most s^2 − s steps are needed to generate Z(s) = Z, and no more than 2s − 1 further steps are needed to generate g from Z(s). By Claim 1, |K(s)| = 2^s ≤ |G|, so s ≤ lg|G|.
Therefore c(g|S) ≤ s^2 + s − 1 ≤ lg^2|G| + lg|G| − 1 ≤ (1 + lg|G|)^2.
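The construction used in the proof can be carried out directly for a small permutation group. The sketch below (illustrative; the function names are made up) builds the sets K(i) and the set Z greedily, following the recursive definition above, for the symmetric group S4.

```python
# Sketch of the reachability construction: grow Z until K(s)^-1 K(s) = G.
# Permutations are tuples p with p[i] the image of i, composed left to right.

def mul(p, q):
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def cube(Z, identity):
    """K(i) = all subproducts z1^a1 * ... * zi^ai with aj in {0, 1}."""
    K = {identity}
    for z in Z:
        K = K | {mul(k, z) for k in K}
    return K

def reachability_set(S, identity):
    """Return (Z, G): while K^-1 K != G, append some g1*g2 with g1 in K^-1 K
    and g2 in S that lies outside K^-1 K, as in the recursive definition."""
    G = {identity}                          # enumerate G = <S> by closure
    frontier = {identity}
    while frontier:
        frontier = {mul(g, s) for g in G for s in S} - G
        G |= frontier
    Z = []
    while True:
        K = cube(Z, identity)
        KK = {mul(inv(a), b) for a in K for b in K}
        if KK == G:
            return Z, G
        Z.append(next(mul(g1, g2) for g1 in KK for g2 in S
                      if mul(g1, g2) not in KK))

# S4 generated by a 4-cycle and a transposition.
a = (1, 2, 3, 0)                 # the 4-cycle (0 1 2 3)
b = (1, 0, 2, 3)                 # the transposition (0 1)
Z, G = reachability_set([a, b], (0, 1, 2, 3))
print(len(G), len(Z))            # |G| = 24, and len(Z) <= lg|G| = 4 by Claim 1
```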