Syntactic predicate

A syntactic predicate specifies the syntactic validity of applying a production in a formal grammar and is analogous to a semantic predicate that specifies the semantic validity of applying a production. It is a simple and effective means of dramatically improving the recognition strength of an LL parser by providing arbitrary lookahead. In their original implementation, syntactic predicates had the form “( α )?” and could only appear on the left edge of a production. The required syntactic condition α could be any valid context-free grammar fragment.

More formally, a syntactic predicate is a form of production intersection, used in parser specifications or in formal grammars. In this sense, the term predicate has the meaning of a mathematical indicator function. If p1 and p2 are production rules, the language generated by both p1 and p2 is their set intersection.

As typically defined or implemented, syntactic predicates implicitly order the productions so that predicated productions specified earlier have higher precedence than predicated productions specified later within the same decision. This conveys an ability to disambiguate ambiguous productions because the programmer can simply specify which production should match.

Parsing expression grammars (PEGs), invented by Bryan Ford, extend these simple predicates by allowing "not predicates" and permitting a predicate to appear anywhere within a production. Moreover, Ford invented packrat parsing to handle these grammars in linear time by employing memoization, at the cost of heap space.
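The memoization trade-off can be illustrated with a minimal packrat-style recognizer in Python (a hypothetical sketch, not Ford's implementation; the rule A ← 'a' A? 'b' is a toy example). Caching the result of each (rule, position) pair ensures no position is parsed twice, giving linear time at the cost of heap space for the memo table:

```python
from functools import lru_cache

def make_parser(text):
    # Memo table: each (rule, position) result is computed at most once,
    # so parsing takes linear time at the cost of heap space for the cache.
    @lru_cache(maxsize=None)
    def A(pos):
        # A <- 'a' A? 'b'  (matches a^n b^n, n >= 1); returns the position
        # after the match, or None on failure (failures are memoized too).
        if pos < len(text) and text[pos] == 'a':
            mid = A(pos + 1)                  # optional inner A
            nxt = mid if mid is not None else pos + 1
            if nxt < len(text) and text[nxt] == 'b':
                return nxt + 1
        return None
    return A

assert make_parser("aaabbb")(0) == 6          # whole input consumed
assert make_parser("aab")(0) is None
```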

It is possible to support linear-time parsing of predicates as general as those allowed by PEGs, but reduce the memory cost associated with memoization by avoiding backtracking where some more efficient implementation of lookahead suffices. This approach is implemented by ANTLR version 3, which uses deterministic finite automata (DFAs) for lookahead; choosing between transitions of the DFA may require testing a predicate (called "pred-LL(*)" parsing). [1]

Overview

Terminology

The term syntactic predicate was coined by Parr & Quong and differentiates this form of predicate from semantic predicates (also discussed). [2]

Syntactic predicates have been called multi-step matching, parse constraints, and simply predicates in various literature. (See References section below.) This article uses the term syntactic predicate throughout for consistency and to distinguish them from semantic predicates.

Formal closure properties

Bar-Hillel et al. [3] show that the intersection of two regular languages is also a regular language, which is to say that the regular languages are closed under intersection.

The context-free languages are also closed under intersection with a regular language, but it has been known at least since Hartmanis [4] that the intersection of two context-free languages is not necessarily a context-free language (the context-free languages are thus not closed under intersection). This can be demonstrated easily using the canonical Type 1 language L = {a^n b^n c^n : n ≥ 1}:

Let L1 = {a^m b^n c^n : m, n ≥ 1} (Type 2)
Let L2 = {a^n b^n c^m : m, n ≥ 1} (Type 2)
Let L3 = L1 ∩ L2

Given the strings abcc, aabbc, and aaabbbccc, it is clear that the only string that belongs to both L1 and L2 (that is, the only one that produces a non-empty intersection) is aaabbbccc.
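The membership tests for L1 and L2 can be sketched mechanically in Python; letter counting stands in for the grammars themselves, and the intersection falls out of conjoining the two checks:

```python
import re

def in_L1(s):
    # L1 = {a^m b^n c^n : m, n >= 1}: b-count must equal c-count
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    return bool(m) and len(m.group(2)) == len(m.group(3))

def in_L2(s):
    # L2 = {a^n b^n c^m : m, n >= 1}: a-count must equal b-count
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    return bool(m) and len(m.group(1)) == len(m.group(2))

strings = ["abcc", "aabbc", "aaabbbccc"]
# only a^n b^n c^n strings survive both checks
assert [s for s in strings if in_L1(s) and in_L2(s)] == ["aaabbbccc"]
```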

Other considerations

In most formalisms that use syntactic predicates, the syntax of the predicate is noncommutative, which is to say that the operation of predication is ordered. For instance, using the above example, consider the following pseudo-grammar, where X ::= Y PRED Z is understood to mean: "Y produces X if and only if Y also satisfies predicate Z":

S    ::= a X
X    ::= Y PRED Z
Y    ::= a+ BNCN
Z    ::= ANBN c+
BNCN ::= b [BNCN] c
ANBN ::= a [ANBN] b

Given the string aaaabbbccc, in the case where Y must be satisfied first (and assuming a greedy implementation), S will generate aX and X in turn will generate aaabbbccc, thereby generating aaaabbbccc. In the case where Z must be satisfied first, ANBN will fail to generate aaaabbb, and thus aaaabbbccc is not generated by the grammar. Moreover, if either Y or Z (or both) specify any action to be taken upon reduction (as would be the case in many parsers), the order in which these productions match determines the order in which those side effects occur. Formalisms that vary over time (such as adaptive grammars) may rely on these side effects.
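A minimal Python sketch of the "Y first" case of the pseudo-grammar above (the names Y, Z, and S come from the grammar; the log list is an invented stand-in for reduction-time side effects, recording the order in which the checks fire):

```python
import re

log = []

def Y(s):
    # Y ::= a+ BNCN, i.e. a+ b^n c^n
    log.append('Y')
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    return bool(m) and len(m.group(2)) == len(m.group(3))

def Z(s):
    # Z ::= ANBN c+, i.e. a^n b^n c+
    log.append('Z')
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    return bool(m) and len(m.group(1)) == len(m.group(2))

def S(s):
    # S ::= a X ;  X ::= Y PRED Z  (Y checked before the predicate Z)
    return s.startswith('a') and Y(s[1:]) and Z(s[1:])

assert S("aaaabbbccc")      # X matches the trailing "aaabbbccc"
assert log == ['Y', 'Z']    # side effects occur in predication order
```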

Examples of use

ANTLR

Parr & Quong [5] give this example of a syntactic predicate:

stat : (declaration)? declaration
     | expression
     ;

which is intended to satisfy the following informally stated [6] constraints of C++:

  1. If it looks like a declaration, it is; otherwise
  2. if it looks like an expression, it is; otherwise
  3. it is a syntax error.

In the first production of rule stat, the syntactic predicate (declaration)? indicates that declaration is the syntactic context that must be present for the rest of that production to succeed. We can interpret the use of (declaration)? as "I am not sure if declaration will match; let me try it out and, if it does not match, I shall try the next alternative." Thus, when encountering a valid declaration, the rule declaration will be recognized twice: once as a syntactic predicate and once during the actual parse to execute semantic actions.

Of note in the above example is the fact that any code triggered by the acceptance of the declaration production will only occur if the predicate is satisfied.
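The try-then-commit strategy of the stat rule can be sketched in Python (a toy illustration: the token shapes below are invented placeholders, not real C++ syntax). Note that on success the declaration path really is parsed twice, once as a trial and once "for real":

```python
def parse_declaration(tokens, pos):
    # toy declaration: 'int' NAME ';'
    if len(tokens) - pos >= 3 and tokens[pos] == 'int' and tokens[pos + 2] == ';':
        return pos + 3                    # position after the match
    return None

def parse_expression(tokens, pos):
    # toy expression: NAME ';'
    if len(tokens) - pos >= 2 and tokens[pos + 1] == ';':
        return pos + 2
    return None

def parse_stat(tokens, pos=0):
    # stat : (declaration)? declaration | expression ;
    if parse_declaration(tokens, pos) is not None:            # syntactic predicate
        return ('declaration', parse_declaration(tokens, pos))  # parsed again "for real"
    end = parse_expression(tokens, pos)
    if end is not None:
        return ('expression', end)
    raise SyntaxError("neither a declaration nor an expression")

assert parse_stat(['int', 'x', ';']) == ('declaration', 3)
assert parse_stat(['x', ';']) == ('expression', 2)
```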

Canonical examples

The language {a^n b^n c^n : n ≥ 1} can be represented in various grammars and formalisms as follows:

Parsing Expression Grammars
S ← &(A !b) a+ B !c
A ← a A? b
B ← b B? c
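The PEG's and-predicate &(A !b) and not-predicate !c can be sketched as a recursive-descent recognizer in Python (a hedged illustration of the predicate mechanics, not a general PEG engine); predicates are zero-width, so the trial parse of A never consumes input:

```python
def recognize(text):
    # A and B return the position after a match, or None on failure.
    def A(i):                                # A <- 'a' A? 'b'
        if i < len(text) and text[i] == 'a':
            j = A(i + 1)
            j = j if j is not None else i + 1
            if j < len(text) and text[j] == 'b':
                return j + 1
        return None

    def B(i):                                # B <- 'b' B? 'c'
        if i < len(text) and text[i] == 'b':
            j = B(i + 1)
            j = j if j is not None else i + 1
            if j < len(text) and text[j] == 'c':
                return j + 1
        return None

    # S <- &(A !b) a+ B !c
    i = A(0)                                 # and-predicate: trial-parse A ...
    if i is None or (i < len(text) and text[i] == 'b'):
        return False                         # ... then !b must hold
    i = 0                                    # predicates are zero-width: restart at 0
    while i < len(text) and text[i] == 'a':  # a+
        i += 1
    if i == 0:
        return False
    i = B(i)
    if i is None:
        return False
    if i < len(text) and text[i] == 'c':     # !c (not-predicate)
        return False
    return i == len(text)                    # require the whole input to be matched

assert recognize("aaabbbccc")
assert not recognize("aabbc")
```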
§-Calculus

Using a bound predicate:

S → {A}B
A → X 'c+'
X → 'a' [X] 'b'
B → 'a+' Y
Y → 'b' [Y] 'c'

Using two free predicates:

A → <'a+'>a <'b+'>b Ψ(ab)X <'c+'>c Ψ(bc)Y
X → 'a' [X] 'b'
Y → 'b' [Y] 'c'
Conjunctive Grammars

(Note: the following example actually generates {a^n b^n c^n : n ≥ 0}, but is included here because it is the example given by the inventor of conjunctive grammars. [7])

S → AB&DC
A → aA | ε
B → bBc | ε
C → cC | ε
D → aDb | ε
Perl 6 rules
rule S { <before <A> <!before b>> a+ <B> <!before c> }
rule A { a <A>? b }
rule B { b <B>? c }

Parsers/formalisms using some form of syntactic predicate

Although by no means an exhaustive list, the following parsers and grammar formalisms employ syntactic predicates:

ANTLR (Parr & Quong)
As originally implemented, [2] syntactic predicates sit on the leftmost edge of a production such that the production to the right of the predicate is attempted if and only if the syntactic predicate first accepts the next portion of the input stream. Although ordered, the predicates are checked first, with parsing of a clause continuing if and only if the predicate is satisfied, and semantic actions only occurring in non-predicates. [5]
Augmented Pattern Matcher (Balmas)
Balmas refers to syntactic predicates as "multi-step matching" in her paper on APM. [8] As an APM parser parses, it can bind substrings to a variable, and later check this variable against other rules, continuing to parse if and only if that substring is acceptable to further rules.
Parsing expression grammars (Ford)
Ford's PEGs have syntactic predicates expressed as the and-predicate and the not-predicate. [9]
§-Calculus (Jackson)
In the §-Calculus, syntactic predicates were originally called simply predicates, but were later divided into bound and free forms, each with different input properties. [10]
Raku rules
Raku introduces a generalized tool for describing a grammar called rules, which are an extension of Perl 5's regular expression syntax. [11] Predicates are introduced via a lookahead mechanism called before, either with "<before ...>" or "<!before ...>" (that is: "not before"). Perl 5 also has such lookahead, but it can only encapsulate Perl 5's more limited regexp features.
ProGrammar (NorKen Technologies)
ProGrammar's GDL (Grammar Definition Language) makes use of syntactic predicates in a form called parse constraints. [12]
Conjunctive and Boolean Grammars (Okhotin)
Conjunctive grammars, first introduced by Okhotin, [13] introduce the explicit notion of conjunction-as-predication. Okhotin's later work on conjunctive and Boolean grammars [14] is the most thorough treatment of this formalism to date.

References

  1. Parr, Terence (2007). The Definitive ANTLR Reference: Building Domain-Specific Languages. The Pragmatic Programmers. p. 328. ISBN   978-3-540-63293-1.
  2. Parr, Terence J.; Quong, Russell (October 1993). "Adding semantic and syntactic predicates to LL(k): pred-LL(k)". Army High Performance Computing Research Center Preprint No. 93-096. pp. 263–277. CiteSeerX 10.1.1.26.427. doi:10.1007/3-540-57877-3_18. Retrieved 26 August 2023.
  3. Bar-Hillel, Y.; Perles, M.; Shamir, E. (1961). "On formal properties of simple phrase structure grammars". Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung. 14 (2): 143–172..
  4. Hartmanis, Juris (1967). "Context-Free Languages and Turing Machine Computations". Proceedings of Symposia in Applied Mathematics. Mathematical Aspects of Computer Science. AMS. 19: 42–51. doi: 10.1090/psapm/019/0235938 . ISBN   9780821867280.
  5. Parr, Terence; Quong, Russell (July 1995). "ANTLR: A Predicated-LL(k) Parser Generator" (PDF). Software: Practice and Experience. 25 (7): 789–810. doi:10.1002/spe.4380250705. S2CID 13453016.
  6. Stroustrup, Bjarne; Ellis, Margaret A. (1990). The Annotated C++ Reference Manual . Addison-Wesley. ISBN   9780201514599.
  7. Okhotin, Alexander (2001). "Conjunctive grammars" (PDF). Journal of Automata, Languages and Combinatorics. 6 (4): 519–535. doi:10.25596/jalc-2001-519. S2CID   18009960. Archived from the original (PDF) on 26 June 2019.
  8. Balmas, Françoise (20–23 September 1994). "An Augmented Pattern Matcher as a Tool to Synthesize Conceptual Descriptions of Programs". Proceedings of the Ninth Knowledge-Based Software Engineering Conference (KBSE '94). Monterey, California. pp. 150–157. doi:10.1109/KBSE.1994.342667. ISBN 0-8186-6380-4.
  9. Ford, Bryan (September 2002). Packrat Parsing: a Practical Linear-Time Algorithm with Backtracking (Master’s thesis). Massachusetts Institute of Technology.
  10. Jackson, Quinn Tyler (March 2006). Adapting to Babel: Adaptivity & Context-Sensitivity in Parsing. Plymouth, Massachusetts: Ibis Publishing. CiteSeerX   10.1.1.403.8977 .
  11. Wall, Larry (2002–2006). "Synopsis 5: Regexes and Rules".
  12. "Grammar Definition Language". NorKen Technologies.
  13. Okhotin, Alexander (2000). "On Augmenting the Formalism of Context-Free Grammars with an Intersection Operation". Proceedings of the Fourth International Conference "Discrete Models in the Theory of Control Systems" (in Russian): 106–109.
  14. Okhotin, Alexander (August 2004). Boolean Grammars: Expressive Power and Algorithms (Doctoral thesis). Kingston, Ontario: School of Computing, Queen's University.