Logical form (linguistics)

In generative grammar and related approaches, the logical form (LF) of a linguistic expression is the variant of its syntactic structure which undergoes semantic interpretation. It is distinguished from phonetic form, the structure which corresponds to a sentence's pronunciation. These separate representations are postulated in order to explain the ways in which an expression's meaning can be partially independent of its pronunciation, e.g. scope ambiguities.

LF is the cornerstone of the classic generative view of the syntax-semantics interface. However, it is not used in Lexical Functional Grammar or Head-Driven Phrase Structure Grammar, nor in some modern variants of the generative approach.

Syntax interfacing with semantics

The notion of Logical Form was originally invented for the purpose of determining quantifier scope. As the theory around the Minimalist Program developed, all output conditions, such as the theta-criterion, the Case Filter, Subjacency, and binding theory, came to be examined at the level of LF. The study of LF is broader than the study of syntax. [1]

The notion of scope

The scope of an operator is the domain within which it has the ability to affect the interpretation of other expressions. In other words, an operator can affect the interpretation of other phrases only within its own domain. Three uncontroversial examples of scope affecting some aspect of interpretation are quantifier-quantifier, quantifier-pronoun, and quantifier-negative polarity item interactions.

In instances where a negation has an indefinite article in its scope, the reader's interpretation is affected: the reader cannot infer the existence of a relevant entity. If negation (or a negation phrase) is within the scope of the subject quantifier, negation is not affected by the quantifier. [2] If Quantified Expression 1 (QE1) is in the domain of QE2, but not vice versa, QE1 must take narrow scope; if each is in the domain of the other, the structure is potentially ambiguous. If neither QE is in the domain of the other, they must be interpreted independently. [3] These assumptions explain the cases where the direct object of the main clause is not within the domain of the embedded subject. For example, "That every boy left upset a teacher" cannot be interpreted as meaning that for every boy, there is a possibly different teacher who was upset by the fact that the boy left. The only available interpretation is that one single teacher was upset. [2]

[Figure: Everyone loves the same someone]
[Figure: Everyone has someone that they love, not necessarily the same person]

Ambiguity motivation

In syntax, LF exists to give a structural account of certain kinds of semantic ambiguities.

Example

Everyone loves someone.

This sentence is semantically ambiguous. Specifically, it contains a scope ambiguity. This ambiguity cannot be resolved at surface structure, since someone, being within the verb phrase, must be lower in the structure than everyone. This case exemplifies the general fact that natural language is insufficiently specified for strict logical meaning. Robert May argued for the postulation of LF partly in order to account for such ambiguities (among other motivations). At LF, the sentence above would have two possible structural representations, one for each possible scope-reading, in order to account for the ambiguity by structural differentiation. In this way it is similar in purpose to, but not the same as, logical form in logic. [4]
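The two scope-readings can be written out in standard predicate-logic notation (a conventional rendering, not May's own LF trees):

```latex
% Surface scope: 'everyone' takes wide scope over 'someone'
\forall x\,\bigl[\mathrm{person}(x) \rightarrow \exists y\,[\mathrm{person}(y) \wedge \mathrm{love}(x,y)]\bigr]

% Inverse scope: 'someone' takes wide scope over 'everyone'
\exists y\,\bigl[\mathrm{person}(y) \wedge \forall x\,[\mathrm{person}(x) \rightarrow \mathrm{love}(x,y)]\bigr]
```

On the first reading each person may love a different individual; on the second, a single individual is loved by everyone.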

Quantification

Key historical developments

There has been discussion about quantification since the 1970s. In 1973, Richard Montague argued that a grammar for a small fragment of English contains the logicosyntactic and semantic devices to handle practically any scope phenomenon. [5] The tool that he mainly relied on was a categorial grammar with functional application; in terms of recent formulations, it can be considered Minimalist syntax with Merge only. However, this approach does not make predictions for some examples with inverse scope (wide scope in object position).

For example, everyone loves someone.

When there is no scope interaction in the relevant portion of the sentence, either choice of derivation yields the same semantics.

A short time later, May suggested a different idea.[ citation needed ] In contrast to Montague, May did not propose any syntax that generates the surface string. He proposed a rule called Quantifier Raising (QR), under which movement operations akin to wh-movement continue to operate at the level of LF, so that each quantifier phrase takes scope over its domain at that level. May suggested that QR applies to all quantifier phrases without exception.

The study of quantification continued in the 1980s. In contrast to May and Montague, it was suggested that independently motivated phrase structure, such as the relative clause, imposes a limitation on scope options. [6]

This clause-boundedness somewhat restricts QR. May also noticed a subject-object asymmetry with respect to the interaction of wh-words and quantifier phrases. [7] He later modified his earlier proposal, arguing that QR determines quantifier scope but does not disambiguate it. To regulate scope interaction, he also proposed the Scope Principle: if two operators govern each other, they can be interpreted in either scopal order. However, this solution was eventually abandoned.

Alternative analyses have been proposed since the emergence of Minimalism in the 1990s. These include attempts to eliminate QR as an operation and to analyze its scopal effects as by-products of independent grammatical processes. [8] Another strategy is to modify QR and show that it can be fitted into a Minimalist architecture.[ citation needed ]

Quantificational noun phrases

Danny Fox discusses the syntactic positions of QNPs as a way of introducing and illustrating the basic semantic and syntactic relations found in LF. [9] By looking at the meaning of a QNP in relation to its predicate, the property attributed to it, we can derive the meaning of the whole sentence.

a. A girl is tall.

b. Many girls are tall.

c. Every girl is tall.

d. No girl is tall. [9]

To understand the Logical Form of these examples, it is important to identify what the basic predicate is and which segments make up the QNPs. In these examples, the predicate is tall and the QNPs are a girl, many girls, every girl and no girl. The logical meaning of these sentences indicates that the property of being tall is attributed to some form of the QNP referring to girl. Along with the QNP and the predicate, there is also an inference of truth value: each sentence is true if its quantified condition holds and false otherwise. [9]

Each of the examples above will have different conditions that make the statement true according to the quantifier that precedes girl. [9]

Truth Value conditions:

Example a. A girl is tall has a truth value of true if and only if (iff) at least one girl is tall.
This quantifier is satisfied by a single instance of a girl being tall.

Example b. Many girls are tall has a truth value of true iff there are many girls who are tall.
This quantifier is satisfied when the number of tall girls meets a contextually determined standard for "many".

Example c. Every girl is tall has a truth value of true iff every girl is tall.
This quantifier requires that every girl, without exception, be tall.

Example d. No girl is tall has a truth value of true iff no girl is tall.
This quantifier requires that not a single girl be tall. [9]

In a syntactic tree, the structure is represented as such: "the argument of a QNP is always the sister of the QNP." [9]
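As a rough set-theoretic illustration (a sketch in the spirit of generalized quantifier theory, not part of Fox's own presentation), each determiner can be modeled as a relation between two sets: the restrictor (the girls) and the predicate (the tall individuals). The numeric threshold for many is a simplifying assumption, since its meaning is context-dependent:

```python
# Toy model: a small domain of individuals.
girls = {"Ann", "Bea", "Cam"}
tall = {"Ann", "Bea", "Dan"}

def a(restrictor, predicate):
    # "A girl is tall": the intersection is non-empty.
    return len(restrictor & predicate) >= 1

def many(restrictor, predicate, threshold=2):
    # "Many girls are tall": the intersection meets a
    # context-dependent threshold (fixed here for illustration).
    return len(restrictor & predicate) >= threshold

def every(restrictor, predicate):
    # "Every girl is tall": the restrictor is a subset of the predicate.
    return restrictor <= predicate

def no(restrictor, predicate):
    # "No girl is tall": the two sets are disjoint.
    return not (restrictor & predicate)

print(a(girls, tall))      # True: Ann and Bea are tall girls
print(many(girls, tall))   # True: two tall girls meet the threshold
print(every(girls, tall))  # False: Cam is not tall
print(no(girls, tall))     # False: some girls are tall
```

Each function mirrors the corresponding truth-value condition above: the sentence is true exactly when the relation between the two sets holds.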

Wh-movement

In linguistics, wh-phrases are operators binding variables at LF, like other quantificational noun phrases. As LF representations show, the relative scope of wh-phrases and quantifiers is syntactically constrained: wh-movement from the subject position yields an unambiguous sentence, while wh-movement from the object position yields an ambiguous one. [7]

Examples

1) What did everyone buy for Max?

[S what2 [S everyone3 [S e3 bought e2 for Max]]]
(Two possible interpretations: what did everyone
collectively buy, versus individually buy)

2) Who bought everything for Max?

[S who3 [S everything2 [S e3 bought e2 for Max]]]
(Only one possible interpretation.)

This example demonstrates the effect of the Path Containment Condition (PCC). A path is the line of dominating nodes running from a trace to its c-commanding Ā-binder. If two such paths intersect, one must be contained in the other; if the paths overlap without one being contained in the other, the structure is ill-formed. The paths in (2) overlap in violation of the PCC, so in order to obtain a grammatical LF structure, everything must adjoin to the VP. The LF structure then becomes:

LF REPRESENTATION:
[S who3 [S e3 [VP everything2 [VP bought e2 for Max]]]]

Cross-linguistic examples

Hungarian

Öt orvos minden betegnek kevés új tablettát írt fel.
five doctor every patient-DAT few new pill-ACC wrote up
"There are five doctors x such that for every patient y, x prescribed few new pills to y."

*Öt orvos kevés betegnek minden új tablettát írt fel.
five doctor few patient-DAT every new pill-ACC wrote up
(Intended: "There are five doctors x such that for few patients y, x prescribed every new pill to y.")

In the Hungarian counterpart of "Five doctors prescribed few new pills to every patient", scope is largely disambiguated by the linear order of quantifiers on the surface. Two facts should be kept in mind: (1) the linear order is not obtained by putting quantifiers together in the desired order, which contradicts the predictions made by Montague's or May's theory; (2) the linear order is not determined by case or grammatical functions, which supports the prediction of Hornstein's theory.[ citation needed ]

Chinese

1.

要是 两个 女人 读过 每本 书。。。
Yàoshi liǎngge nǚrén dúguo měiběn shū...
if two women read+ASP every book

i. "if there are two women who read every book..."
ii. *"if for every book, there are two women who read it..."

2.

要是 两个 线索 被 每个人 找到。。。
Yàoshi liǎngge xiànsuǒ bèi měigerén zhǎodào...
if two clues by everyone found

i. "if there are two clues that are found by everyone..."
ii. "if for everyone, there are two clues she or he finds..."

A-chains have been argued to play a significant role in Chinese, where scope is disambiguated by case positions in some examples. [10] Here, the active sentence (1) allows only subject wide scope, while the passive sentence (2) is ambiguous. The active sentence has only one interpretation, on which there are two particular women who read every book. According to Aoun and Li, Chinese does not have VP-internal subjects, so liǎngge nǚrén cannot be reconstructed at LF, and the sentence has no ambiguous interpretation. The passive sentence, however, has two interpretations: either everyone finds the same two clues, or everyone finds two clues that may be different for each person. Because liǎngge xiànsuǒ occupies a VP-internal complement position, it can be reconstructed at LF, so the passive sentence has two different interpretations.

English

A boy climbed every tree.
i. A single boy climbed all the trees.
ii. For every tree there is a boy, who may be different for each tree, that climbed that tree.

This sentence is ambiguous in that the noun boy can refer either to a particular individual or to a different individual for each instance of tree under the quantifier every. [9] On the interpretation that a single boy climbed all the trees, a boy takes wide scope; on the interpretation that there may be a different boy for each tree, a boy takes narrow scope.
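In the same predicate-logic notation used for scope-readings, the two interpretations differ only in the relative order of the quantifiers:

```latex
% Wide scope for 'a boy': one boy climbed all the trees
\exists x\,\bigl[\mathrm{boy}(x) \wedge \forall y\,[\mathrm{tree}(y) \rightarrow \mathrm{climb}(x,y)]\bigr]

% Narrow scope for 'a boy': each tree may have a different climber
\forall y\,\bigl[\mathrm{tree}(y) \rightarrow \exists x\,[\mathrm{boy}(x) \wedge \mathrm{climb}(x,y)]\bigr]
```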

See also

Related Research Articles

In linguistics, syntax is the study of how words and morphemes combine to form larger units such as phrases and sentences. Central concerns of syntax include word order, grammatical relations, hierarchical sentence structure (constituency), agreement, the nature of crosslinguistic variation, and the relationship between form and meaning (semantics). There are numerous approaches to syntax that differ in their central assumptions and goals.

In linguistics, transformational grammar (TG) or transformational-generative grammar (TGG) is part of the theory of generative grammar, especially of natural languages. It considers grammar to be a system of rules that generate exactly those combinations of words that form grammatical sentences in a given language and involves the use of defined operations to produce new sentences from existing ones.

An adjective phrase is a phrase whose head is an adjective. Almost any grammar or syntax textbook or dictionary of linguistics terminology defines the adjective phrase in a similar way, e.g. Kesner Bland (1996:499), Crystal (1996:9), Greenbaum (1996:288ff.), Haegeman and Guéron (1999:70f.), Brinton (2000:172f.), Jurafsky and Martin (2000:362). The adjective can initiate the phrase, conclude the phrase, or appear in a medial position. The dependents of the head adjective—i.e. the other words and phrases inside the adjective phrase—are typically adverb or prepositional phrases, but they can also be clauses. Adjectives and adjective phrases function in two basic ways, attributively or predicatively. An attributive adjective (phrase) precedes the noun of a noun phrase. A predicative adjective (phrase) follows a linking verb and serves to describe the preceding subject, e.g. The man is very happy.

Lexical semantics, as a subfield of linguistic semantics, is the study of word meanings. It includes the study of how words structure their meaning, how they act in grammar and compositionality, and the relationships between the distinct senses and uses of a word.

In linguistics, X-bar theory is a model of phrase-structure grammar and a theory of syntactic category formation that was first proposed by Noam Chomsky in 1970 reformulating the ideas of Zellig Harris (1951), and further developed by Ray Jackendoff, along the lines of the theory of generative grammar put forth in the 1950s by Chomsky. It attempts to capture the structure of phrasal categories with a single uniform structure called the X-bar schema, basing itself on the assumption that any phrase in natural language is an XP that is headed by a given syntactic category X. It played a significant role in resolving issues that phrase structure rules had, representative of which is the proliferation of grammatical rules, which is against the thesis of generative grammar.


Generative grammar is a research tradition in linguistics that aims to explain the cognitive basis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge. Generative linguists tend to share certain working assumptions such as the competence-performance distinction and the notion that some domain-specific aspects of grammar are partly innate. These assumptions are rejected in non-generative approaches such as usage-based models of language. Generative linguistics includes work in core areas such as syntax, semantics, phonology, psycholinguistics, and language acquisition, with additional extensions to topics including biolinguistics and music cognition.

In linguistics, the minimalist program is a major line of inquiry that has been developing inside generative grammar since the early 1990s, starting with a 1993 paper by Noam Chomsky.

Montague grammar is an approach to natural language semantics, named after American logician Richard Montague. The Montague grammar is based on mathematical logic, especially higher-order predicate logic and lambda calculus, and makes use of the notions of intensional logic, via Kripke models. Montague pioneered this approach in the 1960s and early 1970s.

In generative grammar and related frameworks, a node in a parse tree c-commands its sister node and all of its sister's descendants. In these frameworks, c-command plays a central role in defining and constraining operations such as syntactic movement, binding, and scope. Tanya Reinhart introduced c-command in 1976 as a key component of her theory of anaphora. The term is short for "constituent command".

The term predicate is used in two ways in linguistics and its subfields. The first defines a predicate as everything in a standard declarative sentence except the subject, and the other defines it as only the main content verb or associated predicative expression of a clause. Thus, by the first definition, the predicate of the sentence Frank likes cake is likes cake, while by the second definition, it is only the content verb likes, and Frank and cake are the arguments of this predicate. The conflict between these two definitions can lead to confusion.

In linguistics, an argument is an expression that helps complete the meaning of a predicate, the latter referring in this context to a main verb and its auxiliaries. In this regard, the complement is a closely related concept. Most predicates take one, two, or three arguments. A predicate and its arguments form a predicate-argument structure. The discussion of predicates and arguments is associated most with (content) verbs and noun phrases (NPs), although other syntactic categories can also be construed as predicates and as arguments. Arguments must be distinguished from adjuncts. While a predicate needs its arguments to complete its meaning, the adjuncts that appear with a predicate are optional; they are not necessary to complete the meaning of the predicate. Most theories of syntax and semantics acknowledge arguments and adjuncts, although the terminology varies, and the distinction is generally believed to exist in all languages. Dependency grammars sometimes call arguments actants, following Lucien Tesnière (1959).

Antecedent-contained deletion (ACD), also called antecedent-contained ellipsis, is a phenomenon whereby an elided verb phrase appears to be contained within its own antecedent. For instance, in the sentence "I read every book that you did", the verb phrase in the main clause appears to license ellipsis inside the relative clause which modifies its object. ACD is a classic puzzle for theories of the syntax-semantics interface, since it threatens to introduce an infinite regress. It is commonly taken as motivation for syntactic transformations such as quantifier raising, though some approaches explain it using semantic composition rules or by adopting more flexible notions of what it means to be a syntactic unit.

In generative grammar, the technical term operator denotes a type of expression that enters into an a-bar movement dependency. One often says that the operator "binds a variable".

The linguistics wars were extended disputes among American theoretical linguists that occurred mostly during the 1960s and 1970s, stemming from a disagreement between Noam Chomsky and several of his associates and students. The debates started in 1967 when linguists Paul Postal, John R. Ross, George Lakoff, and James D. McCawley —self-dubbed the "Four Horsemen of the Apocalypse"—proposed an alternative approach in which the relation between semantics and syntax is viewed differently, which treated deep structures as meaning rather than syntactic objects. While Chomsky and other generative grammarians argued that meaning is driven by an underlying syntax, generative semanticists posited that syntax is shaped by an underlying meaning. This intellectual divergence led to two competing frameworks in generative semantics and interpretive semantics.

Merge is one of the basic operations in the Minimalist Program, a leading approach to generative syntax, when two syntactic objects are combined to form a new syntactic unit. Merge also has the property of recursion in that it may be applied to its own output: the objects combined by Merge are either lexical items or sets that were themselves formed by Merge. This recursive property of Merge has been claimed to be a fundamental characteristic that distinguishes language from other cognitive faculties. As Noam Chomsky (1999) puts it, Merge is "an indispensable operation of a recursive system ... which takes two syntactic objects A and B and forms the new object G={A,B}" (p. 2).

In semantics, a donkey sentence is a sentence containing a pronoun which is semantically bound but syntactically free. They are a classic puzzle in formal semantics and philosophy of language because they are fully grammatical and yet defy straightforward attempts to generate their formal language equivalents. In order to explain how speakers are able to understand them, semanticists have proposed a variety of formalisms including systems of dynamic semantics such as Discourse Representation Theory. Their name comes from the example sentence "Every farmer who owns a donkey beats it", in which the pronoun "it" is semantically but not syntactically bound by the indefinite noun phrase "a donkey". The phenomenon is known as donkey anaphora.

Formal semantics is the study of grammatical meaning in natural languages using formal tools from logic, mathematics and theoretical computer science. It is an interdisciplinary field, sometimes regarded as a subfield of both linguistics and philosophy of language. It provides accounts of what linguistic expressions mean and how their meanings are composed from the meanings of their parts. The enterprise of formal semantics can be thought of as that of reverse-engineering the semantic components of natural languages' grammars.

In formal semantics, the scope of a semantic operator is the semantic object to which it applies. For instance, in the sentence "Paulina doesn't drink beer but she does drink wine," the proposition that Paulina drinks beer occurs within the scope of negation, but the proposition that Paulina drinks wine does not. Scope can be thought of as the semantic order of operations.

In formal semantics, a type shifter is an interpretation rule that changes an expression's semantic type. For instance, the English expression "John" might ordinarily denote John himself, but a type shifting rule called Lift can raise its denotation to a function which takes a property and returns "true" if John himself has that property. Lift can be seen as mapping an individual onto the principal ultrafilter that it generates.

  1. Without type shifting: "John" denotes the individual j.
  2. Type shifting with Lift: "John" denotes λP. P(j), the set of John's properties.

In linguistics, the syntax–semantics interface is the interaction between syntax and semantics. Its study encompasses phenomena that pertain to both syntax and semantics, with the goal of explaining correlations between form and meaning. Specific topics include scope, binding, and lexical semantic properties such as verbal aspect and nominal individuation, semantic macroroles, and unaccusativity.

References

Bibliography