An L-system or Lindenmayer system is a parallel rewriting system and a type of formal grammar. An L-system consists of an alphabet of symbols that can be used to make strings, a collection of production rules that expand each symbol into some larger string of symbols, an initial "axiom" string from which to begin construction, and a mechanism for translating the generated strings into geometric structures. L-systems were introduced and developed in 1968 by Aristid Lindenmayer, a Hungarian theoretical biologist and botanist at the University of Utrecht. [1] Lindenmayer used L-systems to describe the behaviour of plant cells and to model the growth processes of plant development. L-systems have also been used to model the morphology of a variety of organisms [2] and can be used to generate self-similar fractals.
As a biologist, Lindenmayer worked with yeast and filamentous fungi and studied the growth patterns of various types of bacteria, such as the cyanobacterium Anabaena catenula. Originally, the L-systems were devised to provide a formal description of the development of such simple multicellular organisms, and to illustrate the neighbourhood relationships between plant cells. Later on, this system was extended to describe higher plants and complex branching structures.
The recursive nature of the L-system rules leads to self-similarity, and thereby fractal-like forms are easy to describe with an L-system. Plant models and natural-looking organic forms are likewise easy to define: by increasing the recursion level, the form slowly 'grows' and becomes more complex. Lindenmayer systems are also popular in the generation of artificial life.
L-system grammars are very similar to the semi-Thue grammar (see Chomsky hierarchy). L-systems are now commonly known as parametric L-systems, defined as a tuple
G = (V, ω, P),
where
V (the alphabet) is a set of symbols containing both elements that can be replaced (variables) and those that cannot be replaced ("constants" or "terminals"),
ω (start, axiom or initiator) is a string of symbols from V defining the initial state of the system, and
P is a set of production rules or productions defining the way variables can be replaced with combinations of constants and other variables. A production consists of two strings, the predecessor and the successor. For any symbol A in V which does not appear on the left-hand side of a production in P, the identity production A → A is assumed; these symbols are called constants or terminals.
The rules of the L-system grammar are applied iteratively starting from the initial state. As many rules as possible are applied simultaneously, per iteration. The fact that each iteration employs as many rules as possible differentiates an L-system from a formal language generated by a formal grammar, which applies only one rule per iteration. If the production rules were to be applied only one at a time, one would quite simply generate a string in a language, and all such sequences of applications would produce the language specified by the grammar. There are some strings in some languages, however, that cannot be generated if the grammar is treated as an L-system rather than a language specification. For example, [3] suppose there is a rule S→SS in a grammar. If productions are done one at a time, then starting from S, we can get first SS, and then, applying the rule again, SSS. However, if all applicable rules are applied at every step, as in an L-system, then we cannot get this sentential form. Instead, the first step would give us SS, but the second would apply the rule twice, giving us SSSS. Thus, the set of strings produced by an L-system from a given grammar is a subset of the formal language defined by the grammar, and if we take a language to be defined as a set of strings, then a given L-system is effectively a subset of the formal language defined by the L-system's grammar.
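The difference can be made concrete with a short sketch. The following Python snippet (illustrative only; the rule table and helper names are not from the article) applies the rule S → SS both in parallel, as an L-system would, and to one occurrence at a time, as a formal-grammar derivation would; only the sequential version can reach SSS.

RULES = {"S": "SS"}

def rewrite_parallel(s, rules):
    # Apply a production to every symbol simultaneously (L-system style).
    return "".join(rules.get(ch, ch) for ch in s)

def rewrite_one(s, rules, position=0):
    # Apply a production to a single occurrence (formal-grammar style).
    ch = s[position]
    return s[:position] + rules.get(ch, ch) + s[position + 1:]

word = "S"
for _ in range(2):
    word = rewrite_parallel(word, RULES)
print(word)                                   # SSSS: lengths go 1, 2, 4, 8, ...

word = rewrite_one(rewrite_one("S", RULES), RULES)
print(word)                                   # SSS: reachable sequentially, never in parallel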
An L-system is context-free if each production rule refers only to an individual symbol and not to its neighbours. Context-free L-systems are thus specified by a context-free grammar. If a rule depends not only on a single symbol but also on its neighbours, the L-system is termed context-sensitive.
If there is exactly one production for each symbol, then the L-system is said to be deterministic (a deterministic context-free L-system is popularly called a D0L system ). If there are several, and each is chosen with a certain probability during each iteration, then it is a stochastic L-system.
Using L-systems for generating graphical images requires that the symbols in the model refer to elements of a drawing on the computer screen. For example, the program Fractint uses turtle graphics (similar to those in the Logo programming language) to produce screen images. It interprets each constant in an L-system model as a turtle command.
Lindenmayer's original L-system for modelling the growth of algae uses the grammar
variables: A B
constants: none
axiom: A
rules: (A → AB), (B → A)
which produces:
n=0:         A                 start (axiom/initiator)
            / \
n=1:       A   B               the initial single A spawned into AB by rule (A → AB); rule (B → A) couldn't be applied
          /|    \
n=2:     A B     A             former string AB with all rules applied: A spawned into AB again, the former B turned into A
        /| |     |\
n=3:   A B A     A B           note all A's producing a copy of themselves first, then a B, which turns ...
      /| | |\    |\ \
n=4: A B A A B   A B A         ... into an A one generation later, starting to spawn/repeat/recurse then
The result is the sequence of Fibonacci words. If we count the length of each string, we obtain the famous Fibonacci sequence of numbers (skipping the first 1, due to our choice of axiom):
If we did not want to skip the first 1, we could use axiom B, which would place a B node before the topmost node A of the graph above.
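As a minimal sketch (none of this code appears in the original article; the rules are simply stored in a Python dictionary), the algae system and the Fibonacci lengths can be reproduced as follows:

rules = {"A": "AB", "B": "A"}   # Lindenmayer's algae rules from above

def step(word):
    # Rewrite every symbol of the previous generation in parallel.
    return "".join(rules.get(ch, ch) for ch in word)

word = "A"                      # axiom
for n in range(8):
    print(n, word, len(word))   # lengths: 1, 2, 3, 5, 8, 13, 21, 34
    word = step(word)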
For each string, if we count the k-th position from the left end of the string, the value is determined by whether a multiple of the golden ratio falls within the interval (k − 1, k). The ratio of A to B likewise converges to the golden mean.
This example yields the same result (in terms of the length of each string, not the sequence of As and Bs) if the rule (A → AB) is replaced with (A → BA), except that the strings are mirrored.
This sequence is a locally catenative sequence because G(n) = G(n − 1)G(n − 2), where G(n) is the n-th generation.
Example 2, a fractal (binary) tree, uses the grammar
variables: 0, 1
constants: [, ]
axiom: 0
rules: (1 → 11), (0 → 1[0]0)
The shape is built by recursively feeding the axiom through the production rules. Each character of the input string is checked against the rule list to determine which character or string to replace it with in the output string. In this example, a '1' in the input string becomes '11' in the output string, while '[' remains the same. Applying this to the axiom of '0', we get:
axiom: 0
1st recursion: 1[0]0
2nd recursion: 11[1[0]0]1[0]0
3rd recursion: 1111[11[1[0]0]1[0]0]11[1[0]0]1[0]0
...
We can see that this string quickly grows in size and complexity. This string can be drawn as an image by using turtle graphics, where each symbol is assigned a graphical operation for the turtle to perform. For example, in the sample above, the turtle may be given the following instructions:
0: draw a line segment ending in a leaf
1: draw a line segment
[: push position and angle, turn left 45 degrees
]: pop position and angle, turn right 45 degrees
The push and pop refer to a LIFO stack (a more technical grammar would have separate symbols for "push position" and "turn left"). When the turtle interpretation encounters a '[', the current position and angle are saved, and are then restored when the interpretation encounters a ']'. If multiple values have been "pushed", then a "pop" restores the most recently saved values. Applying the graphical rules listed above to the earlier recursion produces the successive stages of the tree.
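The bracket mechanics can be sketched in Python without any graphics library, assuming the 45° turn convention above and that both '0' and '1' draw a unit segment; the function names here are illustrative, not from any particular package. The interpreter walks the rewritten string, keeps (position, angle) on a list used as a LIFO stack, and records the line segments it would draw.

import math

def expand(axiom, rules, n):
    # Repeatedly rewrite every symbol in parallel.
    word = axiom
    for _ in range(n):
        word = "".join(rules.get(ch, ch) for ch in word)
    return word

def interpret(word, step_angle=45.0):
    # Walk the string, collecting the line segments the turtle would draw.
    x, y, angle = 0.0, 0.0, 90.0          # start at the origin, pointing up
    stack = []                            # LIFO stack of saved (x, y, angle)
    segments = []
    for ch in word:
        if ch in "01":                    # both symbols draw a unit segment
            nx = x + math.cos(math.radians(angle))
            ny = y + math.sin(math.radians(angle))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif ch == "[":                   # save state, then turn left
            stack.append((x, y, angle))
            angle += step_angle
        elif ch == "]":                   # restore the most recent state, then turn right
            x, y, angle = stack.pop()
            angle -= step_angle
    return segments

tree = expand("0", {"1": "11", "0": "1[0]0"}, 3)
print(len(interpret(tree)))               # number of segments in the 3rd recursion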
Let A mean "draw forward" and B mean "move forward".
This produces the famous Cantor set on the real line R.
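A small sketch, assuming the standard formulation of this example (axiom A with rules A → ABA and B → BBB, reconstructed rather than quoted from the text above):

rules = {"A": "ABA", "B": "BBB"}    # assumed standard Cantor-set rules

word = "A"                          # axiom
for n in range(4):
    print(n, word)                  # A, ABA, ABABBBABA, ...
    word = "".join(rules.get(ch, ch) for ch in word)

# After n steps the string has length 3**n; reading the symbols as equal
# subintervals of [0, 1], the A's mark the pieces that remain drawn.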
A variant of the Koch curve which uses only right angles.
Here, F means "draw forward", + means "turn left 90°", and − means "turn right 90°" (see turtle graphics).
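As a sketch, assuming the commonly cited axiom F and rule F → F+F−F−F+F for this right-angle variant (reconstructed, not quoted from the text above), the expansion and a 90° grid-walk interpretation look like this in Python:

def expand(axiom, rules, n):
    word = axiom
    for _ in range(n):
        word = "".join(rules.get(ch, ch) for ch in word)
    return word

def trace(word):
    # F moves one unit forward; + turns left 90 degrees; - turns right 90 degrees.
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # east, north, west, south
    heading, x, y = 0, 0, 0
    points = [(x, y)]
    for ch in word:
        if ch == "F":
            dx, dy = directions[heading]
            x, y = x + dx, y + dy
            points.append((x, y))
        elif ch == "+":
            heading = (heading + 1) % 4
        elif ch == "-":
            heading = (heading - 1) % 4
    return points

koch = expand("F", {"F": "F+F-F-F+F"}, 2)   # assumed rule for the right-angle variant
print(koch)
print(trace(koch)[-1])                      # endpoint of the traced path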
The Sierpinski triangle drawn using an L-system.
Here, F and G both mean "draw forward", + means "turn left by angle", and − means "turn right by angle".
It is also possible to approximate the Sierpinski triangle using a Sierpiński arrowhead curve L-system.
Here, A and B both mean "draw forward", + means "turn left by angle", and − means "turn right by angle" (see turtle graphics).
The dragon curve drawn using an L-system.
Here, F and G both mean "draw forward", + means "turn left by angle", and − means "turn right by angle".
The turtle interpretation of this system uses a stack, which follows the LIFO (last in, first out) method for adding and removing elements. Here, F means "draw forward", − means "turn right 25°", and + means "turn left 25°". X does not correspond to any drawing action and is used to control the evolution of the curve. The square bracket "[" corresponds to saving the current values for position and angle, so the position and angle are pushed onto the top of the stack; when the "]" token is encountered, the stack is popped and the position and angle are restored. Every "[" comes before its matching "]" token.
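A sketch of such a bracketed system in Python, assuming the commonly cited "fractal plant" rules X → F+[[X]−X]−F[−FX]+X and F → FF with axiom X (reconstructed here, not quoted from the text above); the bracket depth returning to zero confirms that every "[" has a matching "]".

rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}   # assumed plant rules
word = "X"                                        # axiom
for _ in range(4):
    word = "".join(rules.get(ch, ch) for ch in word)

depth, max_depth = 0, 0
for ch in word:
    if ch == "[":
        depth += 1                # push: save position and angle
        max_depth = max(max_depth, depth)
    elif ch == "]":
        depth -= 1                # pop: restore the most recent saved state
print(len(word), word.count("F"), max_depth, depth)   # final depth 0: brackets balance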
A number of elaborations on this basic L-system technique have been developed which can be used in conjunction with each other. Among these are stochastic grammars, context-sensitive grammars, and parametric grammars.
The grammar model we have discussed thus far has been deterministic: given any symbol in the grammar's alphabet, there has been exactly one production rule, which is always chosen and always performs the same conversion. One alternative is to specify more than one production rule for a symbol, giving each a probability of occurring. For example, in the grammar of Example 2, we could change the rule for rewriting "0" from:
0 → 1[0]0
to a probabilistic rule:
0 (0.5) → 1[0]0
0 (0.5) → 0
Under this production, whenever a "0" is encountered during string rewriting, there would be a 50% chance it would behave as previously described, and a 50% chance it would not change during production. When a stochastic grammar is used in an evolutionary context, it is advisable to incorporate a random seed into the genotype, so that the stochastic properties of the image remain constant between generations.
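A minimal Python sketch of such a stochastic rule, assuming the 50/50 split described above and a seed supplied externally (for example, stored with the genotype); the rule table and function names are illustrative only.

import random

stochastic_rules = {                       # illustrative encoding of the rule above
    "0": [("1[0]0", 0.5), ("0", 0.5)],
    "1": [("11", 1.0)],
}

def step(word, rules, rng):
    out = []
    for ch in word:
        if ch in rules:
            successors = [s for s, _ in rules[ch]]
            weights = [w for _, w in rules[ch]]
            out.append(rng.choices(successors, weights=weights)[0])
        else:
            out.append(ch)                 # constants such as [ and ] pass through
    return "".join(out)

rng = random.Random(42)                    # seed carried with the genotype
word = "0"
for _ in range(3):
    word = step(word, stochastic_rules, rng)
print(word)                                # identical on every run with the same seed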
A context-sensitive production rule looks not only at the symbol it is modifying, but also at the symbols on the string appearing before and after it. For instance, the production rule
b < a > c → aa
transforms "a" to "aa", but only if the "a" occurs between a "b" and a "c" in the input string:
As with stochastic productions, there are multiple productions to handle symbols in different contexts. If no production rule can be found for a given context, the identity production is assumed, and the symbol does not change on transformation. If context-sensitive and context-free productions both exist within the same grammar, the context-sensitive production is assumed to take precedence when it is applicable.
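A minimal Python sketch of the context-sensitive rule discussed above (b < a > c → aa), with the identity production applied everywhere else; note that the context is read from the old string, so the pass remains parallel.

def step(word):
    # Rewrite in one parallel pass, reading context from the old string.
    out = []
    for i, ch in enumerate(word):
        left = word[i - 1] if i > 0 else ""
        right = word[i + 1] if i + 1 < len(word) else ""
        if ch == "a" and left == "b" and right == "c":
            out.append("aa")          # rule b < a > c -> aa applies
        else:
            out.append(ch)            # identity production otherwise
    return "".join(out)

print(step("bac"))    # baac -- the "a" sits between "b" and "c", so it doubles
print(step("baac"))   # baac -- neither "a" has both of the required neighbours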
In a parametric grammar, each symbol in the alphabet has a parameter list associated with it. A symbol coupled with its parameter list is called a module, and a string in a parametric grammar is a series of modules. An example string might be
a(0,1)[b(0,0)]a(1,2)
The parameters can be used by the drawing functions, and also by the production rules. The production rules can use the parameters in two ways: first, in a conditional statement determining whether the rule will apply, and second, the production rule can modify the actual parameters. For example, look at
a(x,y) : x == 0 → a(1, y+1)b(2,3)
The module a(x,y) undergoes transformation under this production rule if the conditional x=0 is met. For example, a(0,2) would undergo transformation, and a(1,2) would not.
In the transformation portion of the production rule, the parameters as well as entire modules can be affected. In the above example, the module b(x,y) is added to the string, with initial parameters (2,3). Also, the parameters of the already existing module are transformed. Under the above production rule,
a(0,2)
becomes
a(1,3)b(2,3)
as the "x" parameter of a(x,y) is explicitly transformed to a "1" and the "y" parameter of a is incremented by one.
Parametric grammars allow line lengths and branching angles to be determined by the grammar, rather than the turtle interpretation methods. Also, if age is given as a parameter for a module, rules can change depending on the age of a plant segment, allowing animations of the entire life-cycle of the tree to be created.
The bi-directional model explicitly separates the symbolic rewriting system from the shape assignment. For example, the string rewriting process in Example 2 (the fractal tree) is independent of how graphical operations are assigned to the symbols. In other words, an infinite number of draw methods are applicable to a given rewriting system.
The bi-directional model consists of 1) a forward process that constructs the derivation tree with production rules, and 2) a backward process that realizes the tree with shapes in a stepwise manner (from leaves to the root). Each inverse-derivation step involves essential geometric-topological reasoning. With this bi-directional framework, design constraints and objectives are encoded in the grammar-shape translation. In architectural design applications, the bi-directional grammar features consistent interior connectivity and a rich spatial hierarchy. [4]
There are many open problems involving studies of L-systems, for example a characterisation of all the deterministic context-free L-systems that are locally catenative.
L-systems on the real line R include the Prouhet–Thue–Morse system.
Well-known L-systems in the plane R2 include space-filling curves (such as the Hilbert curve and Peano curves), median space-filling curves (such as the Lévy C curve and the Heighway dragon curve), tilings, and trees and plants.