Minimalist grammar


Minimalist grammars are a class of formal grammars that aim to provide a more rigorous, usually proof-theoretic, formalization of the Chomskyan Minimalist program than is normally provided in the mainstream Minimalist literature. A variety of particular formalizations exist, most of them developed by Edward Stabler, Alain Lecomte, Christian Retoré, or combinations thereof.


Lecomte and Retoré's extensions of the Lambek Calculus

Lecomte and Retoré (2001) [1] introduce a formalism that modifies the core of the Lambek Calculus to allow movement-like processes to be described without resort to the combinatorics of Combinatory categorial grammar. The formalism is presented in proof-theoretic terms. Differing only slightly in notation from Lecomte and Retoré (2001), we can define a minimalist grammar as a 3-tuple $G = (C, F, L)$, where $C$ is a set of "categorial" features, $F$ is a set of "functional" features (which come in two flavors, "weak", denoted simply $f$, and "strong", denoted $\overline{f}$), and $L$ is a set of lexical atoms, denoted as pairs $\langle a, t \rangle$, where $a$ is some phonological/orthographic content and $t$ is a syntactic type defined recursively as follows:

all features in $C$ and $F$ are (atomic) types, and
if $A$ and $B$ are types, so are $A / B$, $A \backslash B$, and $A \circ B$.
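This recursive definition of types translates directly into an algebraic data type. The following is a minimal sketch, not drawn from Lecomte and Retoré: the names Feature, Ty, Over, Under, Prod and the example feature strings are assumptions made here purely to illustrate the shape of the definition.

```haskell
-- Sketch of the type language: features are atoms; /, \ and ∘ build complex types.
data Feature
  = Categorial String       -- a categorial feature, e.g. Categorial "d"
  | Weak String             -- a weak functional feature f
  | Strong String           -- a strong functional feature f̄
  deriving (Eq, Show)

data Ty
  = Atom Feature            -- every feature in C or F is an atomic type
  | Over Ty Ty              -- Over a b   ~  a / b  (seeks a b to its right)
  | Under Ty Ty             -- Under b a  ~  b \ a  (seeks a b to its left)
  | Prod Ty Ty              -- Prod a b   ~  a ∘ b  (product, used for movement)
  deriving (Eq, Show)

-- A lexical atom pairs phonological/orthographic content with a type.
type LexItem = (String, Ty)
```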

We can now define 6 inference rules:

$\vdash a : t$, for all $\langle a, t \rangle \in L$
$x : T \vdash x : T$, for all types $T$
$\frac{\Gamma \vdash a : A / B \quad\quad \Delta \vdash b : B}{\Gamma ; \Delta \vdash a\,b : A}$ (/E)
$\frac{\Delta \vdash b : B \quad\quad \Gamma \vdash a : B \backslash A}{\Delta ; \Gamma \vdash b\,a : A}$ (\E)
$\frac{\Gamma ; \Delta \vdash a : A}{\Gamma , \Delta \vdash a : A}$ (Entropy)
$\frac{\Delta \vdash a : A \circ B \quad\quad \Gamma , x : A , y : B , \Gamma' \vdash c : C}{\Gamma , \Delta , \Gamma' \vdash c' : C}$ (◦E), where $c'$ is $c$ with the phonological content of the hypotheses $x$ and $y$ filled in as described below

The first rule merely makes it possible to use lexical items with no extra assumptions. The second rule is a means of introducing assumptions (hypotheses) into the derivation. The third and fourth rules perform directional feature checking, combining the assumptions required to build the subparts that are being combined. The entropy rule allows the ordered contexts (separated by ";") to be relaxed into unordered contexts (separated by ","). Finally, the last rule implements "movement" by means of assumption elimination.
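The directional feature checking done by /E and \E can be sketched as two partial functions over string–type pairs, reusing the Ty type from the sketch above. This is a simplification assumed here: it drops the contexts Γ and Δ and tracks only the string built so far and its type.

```haskell
-- /E:  a : A / B  combined with  b : B  to its right yields  a b : A
forwardElim :: (String, Ty) -> (String, Ty) -> Maybe (String, Ty)
forwardElim (a, Over resTy argTy) (b, t)
  | argTy == t = Just (a ++ " " ++ b, resTy)
forwardElim _ _ = Nothing

-- \E:  b : B  followed by  a : B \ A  yields  b a : A
backwardElim :: (String, Ty) -> (String, Ty) -> Maybe (String, Ty)
backwardElim (b, t) (a, Under argTy resTy)
  | argTy == t = Just (b ++ " " ++ a, resTy)
backwardElim _ _ = Nothing
```

A fuller implementation would also carry the multiset of undischarged hypotheses alongside each string–type pair, so that the entropy and ◦E rules have something to operate on.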

The last rule can be given a number of different interpretations in order to fully mimic movement of the normal sort found in the Minimalist Program. The account given by Lecomte and Retoré (2001) is that if one of the product types is a strong functional feature, then the phonological/orthographic content at the position associated with that type on the right-hand side is replaced by the content $a$, and the other position is replaced by the empty string; whereas if neither is strong, then the content $a$ is substituted for the position of the categorial feature, and the empty string for the position of the weak functional feature. That is, we can rephrase the rule as two sub-rules as follows:

$\frac{\Delta \vdash a : A \circ B \quad\quad \Gamma , x : A , y : B , \Gamma' \vdash c : C}{\Gamma , \Delta , \Gamma' \vdash c[a/x,\ \epsilon/y] : C}$ where $A = \overline{f}$ for some $f \in F$
$\frac{\Delta \vdash a : A \circ B \quad\quad \Gamma , x : A , y : B , \Gamma' \vdash c : C}{\Gamma , \Delta , \Gamma' \vdash c[\epsilon/x,\ a/y] : C}$ where $A = f$ for some $f \in F$ and $B \in C$
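In code, the difference between the two sub-rules is simply which hypothesis position is pronounced. The sketch below is an assumption of this reading, reuses the types from the first sketch, and takes the functional feature to be the first component of the product: the strong case is spelled out at the landing site, the weak case at the base position.

```haskell
-- Tokens of a partially built string: either a word or a named hole left by
-- a hypothesis (rule 2) that has not yet been discharged.
data Token = Word String | Hole String
  deriving (Eq, Show)

-- Discharge the hypotheses x : A and y : B against a lexical item a : A ∘ B.
-- Strong functional feature: pronounce a at x (overt movement), ε at y.
-- Weak functional feature:   pronounce a at y (the base position), ε at x.
productElim :: String -> Ty -> String -> String -> [Token] -> [Token]
productElim a (Prod (Atom feat) _) x y = concatMap fill
  where
    overt = case feat of
              Strong _ -> True     -- strong: spell out at the landing site
              _        -> False    -- weak: spell out low
    fill (Hole h)
      | h == x = [Word a | overt]      -- ε here when movement is covert
      | h == y = [Word a | not overt]  -- ε here when movement is overt
    fill tok   = [tok]                 -- other words and holes are untouched
productElim _ _ _ _ = id               -- not a product type: nothing to discharge
```

On this sketch, an item of type $\overline{wh} \circ d$ is pronounced at the hole left by the wh hypothesis and replaced by the empty string at the base position, which is the overt wh-movement pattern.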

Another alternative would be to construct pairs in the /E and \E steps, and use the rule as given, substituting the phonological/orthographic content $a$ into the highest of the substitution positions and the empty string into the rest of the positions. This would be more in line with the Minimalist Program, given that multiple movements of an item are possible, where only the highest position is "spelled out".
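Under that alternative reading, discharge would place the content in the highest of the collected substitution positions and the empty string in all the others. A trivial sketch of that spell-out choice, assuming the positions are listed highest-first:

```haskell
-- Given the moved item's content and its substitution positions ordered
-- highest-first, pronounce it only in the highest position.
spellOutHighest :: String -> [String] -> [(String, String)]
spellOutHighest a (highest : lower) = (highest, a) : [ (pos, "") | pos <- lower ]
spellOutHighest _ []                = []
```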

Example

As a simple example of this system, we can show how to generate the sentence "who did John see" with the following toy grammar:

Let $G = (C, F, L)$, where $L$ contains the following words:

The proof for the sentence "who did John see" is therefore:
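Purely as an illustration of the shape such a toy lexicon can take (every entry, category name, and type assignment below is an assumption of this sketch rather than something given by Lecomte and Retoré), one possibility in the notation of the earlier sketches is:

```haskell
-- A hypothetical lexicon for "who did John see" (illustrative only).
toyLexicon :: [LexItem]
toyLexicon =
  [ ("John", Atom (Categorial "d"))
    -- d : a simple nominal
  , ("see",  Over (Under (Atom (Categorial "d")) (Atom (Categorial "v")))
                  (Atom (Categorial "d")))
    -- (d \ v) / d : finds its object to the right, then its subject to the left
  , ("did",  Over (Under (Atom (Strong "wh")) (Atom (Categorial "c")))
                  (Atom (Categorial "v")))
    -- (wh̄ \ c) / v : combines with the clause, then expects a wh element to its left
  , ("who",  Prod (Atom (Strong "wh")) (Atom (Categorial "d")))
    -- wh̄ ∘ d : a strong wh feature paired with the object position it binds
  ]
```

With entries of this kind, the derivation would hypothesize $x : \overline{wh}$ and $y : d$, build "x did John see y" of type $c$ by /E and \E, and then discharge both hypotheses against "who" of type $\overline{wh} \circ d$, pronouncing "who" at the $x$ position and the empty string at $y$, which yields "who did John see".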


References

  1. Lecomte, A.; Retoré, C. (2001). "Extending Lambek Grammars: A Logical Account of Minimalist Grammars". Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics. pp. 362–369.
