Lexical semantics

Lexical semantics (also known as lexicosemantics), as a subfield of linguistic semantics, is the study of word meanings. [1] [2] It includes the study of how words structure their meaning, how they act in grammar and compositionality, [1] and the relationships between the distinct senses and uses of a word. [2]

The units of analysis in lexical semantics are lexical units, which include not only words but also sub-word units such as affixes, as well as compound words and phrases. Together, lexical units make up the catalogue of words in a language: the lexicon. Lexical semantics examines how the meaning of lexical units correlates with the structure of the language, or syntax; this is referred to as the syntax–semantics interface. [3]

The study of lexical semantics concerns:

- the classification and decomposition of lexical items
- the differences and similarities in lexical semantic structure cross-linguistically
- the relationship of lexical meaning to sentence meaning and syntax

Lexical units, also referred to as syntactic atoms, can stand alone, as root words and parts of compound words do, or they can require association with other units, as prefixes and suffixes do. The former are termed free morphemes and the latter bound morphemes. [4] Lexical units fall into a narrow range of meanings (semantic fields) and can combine with each other to generate new denotations.

Cognitive semantics is the linguistic paradigm/framework that since the 1980s has generated the most studies in lexical semantics, introducing innovations like prototype theory, conceptual metaphors, and frame semantics. [5]

Lexical relations

Lexical items contain information about category (lexical and syntactic), form, and meaning. The semantics related to these categories then relates to each lexical item in the lexicon. [6] Lexical items can also be semantically classified based on whether their meanings are derived from single lexical units or from their surrounding environment.

Lexical items participate in regular patterns of association with each other. Some relations between lexical items include hyponymy, hypernymy, synonymy, and antonymy, as well as homonymy. [6]

Hyponymy and hypernymy

Hyponymy and hypernymy refer to the relationship between a general term and the more specific terms that fall under it.

For example, the colors red, green, blue and yellow are hyponyms. They fall under the general term of color, which is the hypernym.

Taxonomy showing the hypernym "color":
Color (hypernym) → red, green, yellow, blue (hyponyms)

Hyponyms and hypernyms can be described by using a taxonomy, as seen in the example.

Synonymy

Synonymy refers to words that are pronounced and spelled differently but have the same meaning.

Happy, joyful, glad [6] 

Antonymy

Antonymy refers to pairs of words that have opposite meanings. There are three types of antonyms: graded antonyms, complementary antonyms, and relational antonyms.

Sleep, awake [6]; long, short

Homonymy

Homonymy refers to the relationship between words that are spelled or pronounced the same way but hold different meanings.

bank (of a river); bank (financial institution)

Polysemy

Polysemy refers to a word having two or more related meanings.

bright (shining); bright (intelligent)

Ambonym

Ambonymy refers to the relationship between two words that function as both synonyms and antonyms of each other, owing to the multiple definitions of each word. It is not to be confused with contronymy.

robbery (a taking without permission; a swindling or overcharging); steal (to take without permission; a bargain)

Semantic networks

Lexical semantics also explores whether the meaning of a lexical unit is established by looking at its neighbourhood in the semantic net (the words it occurs with in natural sentences), or whether the meaning is already locally contained in the lexical unit.

In English, WordNet is an example of a semantic network. It contains English words that are grouped into synsets. Some semantic relations between these synsets are meronymy, hyponymy, synonymy, and antonymy.
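A toy semantic network in this spirit can be sketched in a few lines of Python. The synset identifiers, members, and relation edges below are illustrative inventions in the style of WordNet, not entries taken from WordNet itself:

```python
# Words are grouped into synsets (sets of synonyms), and labelled edges
# connect synsets. Synonymy falls out of synset membership; the other
# relations (antonymy, hyponymy, meronymy) are explicit edges.
SYNSETS = {
    "happy.a.01": {"happy", "joyful", "glad"},
    "sad.a.01": {"sad", "unhappy"},
    "tree.n.01": {"tree"},
    "trunk.n.01": {"trunk"},
    "plant.n.01": {"plant", "flora"},
}

# Directed, labelled relations between synsets.
RELATIONS = [
    ("happy.a.01", "antonym", "sad.a.01"),
    ("tree.n.01", "hyponym_of", "plant.n.01"),  # a tree is a kind of plant
    ("trunk.n.01", "meronym_of", "tree.n.01"),  # a trunk is a part of a tree
]

def synonyms(word: str) -> set[str]:
    """All words sharing a synset with `word` (including itself)."""
    out: set[str] = set()
    for members in SYNSETS.values():
        if word in members:
            out |= members
    return out

def related(synset: str, relation: str) -> list[str]:
    """Synsets reachable from `synset` via edges labelled `relation`."""
    return [dst for src, rel, dst in RELATIONS
            if src == synset and rel == relation]

print(synonyms("happy"))                 # {'happy', 'joyful', 'glad'}
print(related("happy.a.01", "antonym"))  # ['sad.a.01']
```

In this representation, a word's meaning is characterized by its neighbourhood in the graph rather than by anything stored inside the word itself.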

Semantic fields

How lexical items map onto concepts

First proposed by Trier in the 1930s, [7] semantic field theory proposes that a group of words with interrelated meanings can be categorized under a larger conceptual domain. This entire entity is thereby known as a semantic field. The words boil, bake, fry, and roast, for example, would fall under the larger semantic category of cooking. Semantic field theory asserts that lexical meaning cannot be fully understood by looking at a word in isolation, but by looking at a group of semantically related words. [8] Semantic relations can refer to any relationship in meaning between lexemes, including synonymy (big and large), antonymy (big and small), hypernymy and hyponymy (rose and flower), converseness (buy and sell), and incompatibility. Semantic field theory does not have concrete guidelines that determine the extent of semantic relations between lexemes. The abstract validity of the theory is a subject of debate. [7]

Knowing the meaning of a lexical item therefore means knowing the semantic entailments the word brings with it. However, it is also possible to understand only one word of a semantic field without understanding other related words. Take, for example, a taxonomy of plants and animals: it is possible to understand the words rose and rabbit without knowing what a marigold or a muskrat is. This is applicable to colors as well, such as understanding the word red without knowing the meaning of scarlet, but understanding scarlet without knowing the meaning of red may be less likely. A semantic field can thus be very large or very small, depending on the level of contrast being made between lexical items. While cat and dog both fall under the larger semantic field of animal, including the breed of dog, like German shepherd, would require contrasts between other breeds of dog (e.g. corgi, or poodle), thus expanding the semantic field further. [9]

How lexical items map onto events

Event structure is defined as the semantic relation of a verb and its syntactic properties. [10] Event structure has three primary components: [11]

- the primitive event type of the lexical item
- the event composition rules
- the mapping rules to lexical structure

Verbs can belong to one of three types: states, processes, or transitions.

(1) a. The door is closed. [11]
    b. The door closed.
    c. John closed the door.

(1a) defines the state of the door being closed; there is no opposition in this predicate. (1b) and (1c) both have predicates showing transitions of the door going from being implicitly open to closed. (1b) gives the intransitive use of the verb close, with no explicit mention of the causer, but (1c) makes explicit mention of the agent involved in the action.
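The three readings in (1) can be illustrated by decomposing each sentence into nested predicates. The tuple encoding below is an assumed, simplified notation for states and transitions, not a standard formalism:

```python
# A minimal sketch of lexical decomposition for the sentences in (1):
# states are primitive, transitions wrap a result state in BECOME, and
# causatives add an explicit agent with CAUSE.

def state(pred, arg):
    """A state predicate holding of an argument, e.g. closed(door)."""
    return ("STATE", pred, arg)

def become(result):
    """A transition into a result state."""
    return ("BECOME", result)

def cause(agent, event):
    """An agent bringing about an event."""
    return ("CAUSE", agent, event)

# (1a) The door is closed.   -- a state; no opposition in the predicate
e1a = state("closed", "door")
# (1b) The door closed.      -- transition, causer left implicit
e1b = become(state("closed", "door"))
# (1c) John closed the door. -- transition with an explicit causer
e1c = cause("John", become(state("closed", "door")))

print(e1c)  # ('CAUSE', 'John', ('BECOME', ('STATE', 'closed', 'door')))
```

The same CAUSE/BECOME subunits reappear below in the analysis of the inchoative/causative alternation, where (1b)-style sentences lack the CAUSE layer that (1c)-style sentences carry.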

Syntactic basis of event structure: a brief history

Generative semantics in the 1960s

The analysis of these different lexical units had a decisive role in the field of "generative linguistics" during the 1960s. [12] The term generative was proposed by Noam Chomsky in his book Syntactic Structures, published in 1957. The term generative linguistics was based on Chomsky's generative grammar, a linguistic theory that states that systematic sets of rules (X' theory) can predict grammatical phrases within a natural language. [13] A later development of generative grammar is known as government and binding theory. Generative linguists of the 1960s, including Noam Chomsky and Ernst von Glasersfeld, believed that the semantic relations between transitive and intransitive verbs were tied to their independent syntactic organization. [12] This meant that they saw a simple verb phrase as encompassing a more complex syntactic structure. [12]

Lexicalist theories in the 1980s

Lexicalist theories became popular during the 1980s and emphasized that a word's internal structure is a question of morphology, not of syntax. [14] Lexicalist theories hold that complex words (resulting from compounding and affixal derivation) have lexical entries derived by morphology, rather than resulting from overlapping syntactic and phonological properties, as generative linguistics predicts. The distinction between the two approaches can be illustrated by the relation between the word destroy and its nominalization destruction:

- Generative linguistics: destruction is derived from destroy through syntactic transformation rules.
- Lexicalist theories: destruction has its own lexical entry, derived from destroy by morphological rules.

A lexical entry lists the basic properties of either the whole word or the individual morphemes that make up the word. The properties of lexical items include their category selection (c-selection), selectional properties (s-selection, also known as semantic selection), [12] phonological properties, and features. The properties of lexical items are idiosyncratic and unpredictable, and contain specific information about the lexical items that they describe. [12]

The following is an example of a lexical entry for the verb put:

put: V; DP(agent), DP(experiencer)/PP(locative)

Lexicalist theories state that a word's meaning is derived from its morphology or a speaker's lexicon, and not from its syntax. The degree of morphology's influence on overall grammar remains controversial. [12] Currently, most linguists hold that a single engine drives both morphological and syntactic items.

Micro-syntactic theories: 1990s to the present

By the early 1990s, Chomsky's minimalist framework for language structure led to sophisticated probing techniques for investigating languages. [15] These probing techniques analyzed negative data against prescriptive grammars, and, because of Chomsky's Extended Projection Principle (EPP), proposed in 1986, showed where the specifiers of a sentence had moved in order to fulfill the EPP. This allowed syntacticians to hypothesize that lexical items with complex syntactic features (such as ditransitive, inchoative, and causative verbs) could select their own specifier element within a syntax tree construction. (For more on probing techniques, see Suci, G., Gammon, P., & Gamlin, P. (1979).)

This brought the focus back to the syntax–lexical semantics interface; however, syntacticians still sought to understand the relationship between complex verbs and their related syntactic structure, and the degree to which the syntax was projected from the lexicon, as the Lexicalist theories argued.

In the mid-1990s, linguists Heidi Harley, Samuel Jay Keyser, and Kenneth Hale addressed some of the implications posed by complex verbs and a lexically derived syntax. Their proposals indicated that the predicates CAUSE and BECOME, referred to as subunits within a verb phrase, acted as a lexical semantic template. [16] Predicates are verbs and state or affirm something about the subject or argument of the sentence. For example, in the sentences below, the predicates went and is here affirm the argument of the subject and the state of the subject, respectively.

Lucy went home.
The parcel is here.

The subunits of verb phrases led to the Argument Structure Hypothesis and the Verb Phrase Hypothesis, both outlined below. [17] The recursion found under the "umbrella" verb phrase, the VP shell, accommodated binary branching theory, another critical topic of the 1990s. [18] Current theory recognizes that the predicate in the specifier position of a tree, in inchoative/anticausative (intransitive) or causative (transitive) verbs, is what selects the theta role conjoined with a particular verb. [12]

Hale & Keyser 1990

Hale and Keyser 1990 structure

Kenneth Hale and Samuel Jay Keyser introduced their thesis on lexical argument structure during the early 1990s. [19] They argue that a predicate's argument structure is represented in the syntax, and that the syntactic representation of the predicate is a lexical projection of its arguments. Thus, the structure of a predicate is strictly a lexical representation, where each phrasal head projects its argument onto a phrasal level within the syntax tree. The selection of this phrasal head is based on Chomsky's Empty Category Principle. This lexical projection of the predicate's argument onto the syntactic structure is the foundation for the Argument Structure Hypothesis. [19] This idea coincides with Chomsky's Projection Principle, because it forces a VP to be selected locally and be selected by a Tense Phrase (TP).

Based on the interaction between lexical properties, locality, and the properties of the EPP (where a phrasal head selects another phrasal element locally), Hale and Keyser claim that the specifier position and the complement are the only two semantic relations that project a predicate's argument. In 2003, Hale and Keyser put forward this hypothesis and argued that a lexical unit must have one or the other, specifier or complement, but cannot have both. [20]

Halle & Marantz 1993

Halle & Marantz 1993 structure

Morris Halle and Alec Marantz introduced the notion of distributed morphology in 1993. [21] This theory views the syntactic structure of words as a result of morphology and semantics, instead of the morpho-semantic interface being predicted by the syntax. Essentially, the idea is that under the Extended Projection Principle there is a local boundary under which a special meaning occurs. This meaning can only occur if a head-projecting morpheme is present within the local domain of the syntactic structure. [22] The following is an example of the tree structure proposed by distributed morphology for the sentence "John's destroying the city", where destroy is the root, V-1 represents verbalization, and D represents nominalization. [22]

Ramchand 2008

In her 2008 book, Verb Meaning and the Lexicon: A First-Phase Syntax, linguist Gillian Ramchand acknowledges the roles of lexical entries in the selection of complex verbs and their arguments. [23] 'First-phase' syntax proposes that event structure and event participants are directly represented in the syntax by means of binary branching. This branching ensures that the specifier is consistently the subject, even when investigating the projection of a complex verb's lexical entry and its corresponding syntactic construction. This generalization is also present in Ramchand's theory that the complement of a head for a complex verb phrase must co-describe the verb's event.

Ramchand also introduced the concept of Homomorphic Unity, which refers to the structural synchronization between the head of a complex verb phrase and its complement. According to Ramchand, Homomorphic Unity means that "when two event descriptors are syntactically Merged, the structure of the complement must unify with the structure of the head." [23]

Classification of event types

Intransitive verbs: unaccusative versus unergative

Underlying tree structure for (2a)
Underlying tree structure for (2b)

The unaccusative hypothesis was put forward by David Perlmutter in 1978, and describes how two classes of intransitive verbs, unaccusative verbs and unergative verbs, have two different syntactic structures. [24] These classes of verbs are defined by Perlmutter only in syntactic terms. They have the following underlying structures:

- Unaccusative verb: __ [VP V NP]
- Unergative verb: NP [VP V]

The following is an example from English:

(2) Unaccusative
    a. Mary fell. [25]
    Unergative
    b. Mary worked.

In (2a) the verb underlyingly takes a direct object, while in (2b) the verb underlyingly takes a subject.

Transitivity alternations: the inchoative/causative alternation

The change-of-state property of Verb Phrases (VP) is a significant observation for the syntax of lexical semantics because it provides evidence that subunits are embedded in the VP structure, and that the meaning of the entire VP is influenced by this internal grammatical structure. (For example, the VP the vase broke carries a change-of-state meaning of the vase becoming broken, and thus has a silent BECOME subunit within its underlying structure.) There are two types of change-of-state predicates: inchoative and causative.

Inchoative verbs are intransitive, meaning that they occur without a direct object, and these verbs express that their subject has undergone a certain change of state. Inchoative verbs are also known as anticausative verbs. [26] Causative verbs are transitive, meaning that they occur with a direct object, and they express that the subject causes a change of state in the object.

Linguist Martin Haspelmath classifies inchoative/causative verb pairs under three main categories: causative, anticausative, and non-directed alternations. [27] Non-directed alternations are further subdivided into labile, equipollent, and suppletive alternations.

Underlying tree structure for (3a)
Underlying tree structure for (3b)

English tends to favour labile alternations, [28] meaning that the same verb is used in the inchoative and causative forms. [27] This can be seen in the following example: broke is an intransitive inchoative verb in (3a) and a transitive causative verb in (3b).

(3) English [26]
    a. The vase broke.
    b. John broke the vase.

As seen in the underlying tree structure for (3a), the silent subunit BECOME is embedded within the Verb Phrase (VP), resulting in the inchoative change-of-state meaning (y become z). In the underlying tree structure for (3b), the silent subunits CAUS and BECOME are both embedded within the VP, resulting in the causative change-of-state meaning (x cause y become z). [12]

English change-of-state verbs are often de-adjectival, meaning that they are derived from adjectives, as in the following example:

(4) a. The knot is loose. [29]
    b. The knot loosened.
    c. Sandy loosened the knot.

In (4a) we start with a stative intransitive adjective; in (4b) we derive an intransitive inchoative verb; and in (4c) we see a transitive causative verb.

Marked inchoatives

Some languages (e.g., German, Italian, and French), have multiple morphological classes of inchoative verbs. [30] Generally speaking, these languages separate their inchoative verbs into three classes: verbs that are obligatorily unmarked (they are not marked with a reflexive pronoun, clitic, or affix), verbs that are optionally marked, and verbs that are obligatorily marked. The causative verbs in these languages remain unmarked. Haspelmath refers to this as the anticausative alternation.

Underlying tree structure for (5a)
Underlying tree structure for (5b)

For example, inchoative verbs in German are classified into three morphological classes. Class A verbs necessarily form inchoatives with the reflexive pronoun sich, Class B verbs form inchoatives necessarily without the reflexive pronoun, and Class C verbs form inchoatives optionally with or without the reflexive pronoun. In example (5), the verb zerbrach is an unmarked inchoative verb from Class B, which also remains unmarked in its causative form. [30]

(5) German [30]
    a. Die Vase zerbrach.
       the  vase broke
       'The vase broke.'
    b. Hans zerbrach die Vase.
       John broke    the vase
       'John broke the vase.'
Underlying tree structure for (6a)
Underlying tree structure for (6b)

In contrast, the verb öffnete is a Class A verb which necessarily takes the reflexive pronoun sich in its inchoative form, but remains unmarked in its causative form.

(6) German [30]
    a. Die Tür öffnete sich.
       the door opened REFL
       'The door opened.'
    b. Hans öffnete die Tür.
       John opened the door
       'John opened the door.'

There has been some debate as to whether the different classes of inchoative verbs are purely based in morphology, or whether the differentiation is derived from the lexical-semantic properties of each individual verb. While this debate is still unresolved in languages such as Italian, French, and Greek, it has been suggested by linguist Florian Schäfer that there are semantic differences between marked and unmarked inchoatives in German. Specifically, that only unmarked inchoative verbs allow an unintentional causer reading (meaning that they can take on an "x unintentionally caused y" reading). [30]

Marked causatives

Underlying tree structure for (7a)
Underlying tree structure for (7b)

Causative morphemes are present in the verbs of many languages (e.g., Tagalog, Malagasy, Turkish, etc.), usually appearing in the form of an affix on the verb. [26] This can be seen in the following examples from Tagalog, where the causative prefix pag- (realized here as nag) attaches to the verb tumba to derive a causative transitive verb in (7b), but the prefix does not appear in the inchoative intransitive verb in (7a). Haspelmath refers to this as the causative alternation.

(7) Tagalog [26]
    a. Tumumba ang bata.
       fell    the child
       'The child fell.'
    b. Nagtumba ng bata si Rosa.
       CAUS-fall of child DET Rosa
       'Rosa knocked the child down.'

Ditransitive verbs

Kayne's 1981 unambiguous path analysis

Tree diagram (8a)
Tree diagram (8b)

Richard Kayne proposed the idea of unambiguous paths as an alternative to c-commanding relationships, the type of structure seen in examples (8). The idea of unambiguous paths stated that an antecedent and an anaphor should be connected via an unambiguous path: the line connecting an antecedent and an anaphor cannot be broken by another argument. [31] When applied to ditransitive verbs, this hypothesis introduces the structure in diagram (8a), in which the same path can be traced from either DP to the verb. Tree diagram (8b) illustrates this structure with an example from English. This analysis was a step toward binary branching trees, a theoretical change that was furthered by Larson's VP-shell analysis. [32]

Larson's 1988 "VP-shell" analysis

Tree diagram for (9a)
Tree diagram for (9b)

Larson posited his Single Complement Hypothesis, in which he stated that every complement is introduced by one verb. The double object construction, presented in 1988, gave clear evidence of a hierarchical structure using asymmetrical binary branching. [32] Sentences with double objects occur with ditransitive verbs, as in the following example:

Larson's proposed binary-branching VP-shell structure for (9)
(9) a. John sent Mary a package. [33]
    b. John sent a package to Mary.

It appears as if the verb send has two objects, or complements (arguments): both Mary, the recipient, and a package, the theme. The argument structure of ditransitive verb phrases is complex and has undergone different structural hypotheses.

The original structural hypothesis was that of ternary branching seen in (9a) and (9b), but following from Kayne's 1981 analysis, Larson maintained that each complement is introduced by a verb. [31] [32]

This hypothesis shows that there is a lower verb embedded within a VP shell that combines with an upper verb (which can be invisible), thus creating a VP shell (as seen in the tree diagram to the right). Most current theories no longer allow the ternary tree structure of (9a) and (9b), so the theme and the goal/recipient are instead seen in a hierarchical relationship within a binary branching structure. [34]

The following are examples of Larson's tests showing that the hierarchical (superior) order of any two objects aligns with their linear order, so that the second is governed (c-commanded) by the first. [32] This is in keeping with the X-bar theory of phrase structure grammar, with Larson's tree structure using an empty verb to which the V is raised.

Reflexives and reciprocals (anaphors) show this relationship, in which they must be c-commanded by their antecedents, such that (10a) is grammatical but (10b) is not:

(10) a. I showed Mary herself. [32]
     b. *I showed herself Mary.

A pronoun must have a quantifier as its antecedent:

(11) a. I gave every worker his paycheck. [32]
     b. *I gave its owner every paycheck.

Question words follow this order:

(12) a. Who did you give which paycheck? [32]
     b. *Which paycheck did you give who?

The effect of negative polarity means that "any" must have a negative quantifier as an antecedent:

General tree diagram for Larson's proposed underlying structure of a sentence with causative meaning
(13) a. I showed no one anything. [32]
     b. *I showed anyone nothing.

These tests with ditransitive verbs that confirm c-command also confirm the presence of underlying or invisible causative verbs. In ditransitive verbs such as give someone something, send someone something, or show someone something, there is an underlying causative meaning that is represented in the underlying structure. As seen in example (9a) above, John sent Mary a package, there is the underlying meaning that John "caused" Mary to have a package.

Larson proposed that both sentences in (9a) and (9b) share the same underlying structure, and that the difference on the surface lies in the fact that the double object construction "John sent Mary a package" is derived by transformation from an NP-plus-PP construction, "John sent a package to Mary".

Beck & Johnson's 2004 double object construction

Beck and Johnson, however, give evidence that the two underlying structures are not the same. [35] In so doing, they also give further evidence of the presence of two VPs where the verb attaches to a causative verb. In examples (14a) and (14b), the double object construction alternates with an NP + PP construction.

(14) a. Satoshi sent Tübingen the Damron Guide. [35]
     b. Satoshi sent the Damron Guide to Tübingen.

Beck and Johnson show that the object in (14a) has a different relation to the motion verb, as it is not able to carry the meaning of HAVE which the possessors in (9a) and (15a) can. In (15a), Satoshi is an animate possessor and so is caused to HAVE kisimen. The PP for Satoshi in (15b) is of a benefactive nature and does not necessarily carry this meaning of HAVE.

(15) a. Thilo cooked Satoshi kisimen. [35]
     b. Thilo cooked kisimen for Satoshi.

The underlying structures are therefore not the same. The differences lie in the semantics and the syntax of the sentences, in contrast to Larson's transformational theory. Further evidence for the structural existence of VP shells with an invisible verbal unit comes from the use of the adjunct or modifier again. Sentence (16) is ambiguous, and looking at the two different meanings reveals a difference in structure.

(16) Sally opened the door again. [35] 
Underlying tree structure for (17a)
Underlying tree structure for (17b)

In (17a), it is clear that it was Sally who repeated the action of opening the door. In (17b), the event is the door being opened, and Sally may or may not have opened it previously. To render these two different meanings, again attaches to the VP in two different places, and thus describes two events with a purely structural change.

(17) a. Sally was so kind that she went out of her way to open the door once again. [35]
     b. The doors had just been shut to keep out the bugs, but Sally opened the door again.

See also

Related Research Articles

In linguistics, syntax is the study of how words and morphemes combine to form larger units such as phrases and sentences. Central concerns of syntax include word order, grammatical relations, hierarchical sentence structure (constituency), agreement, the nature of crosslinguistic variation, and the relationship between form and meaning (semantics). There are numerous approaches to syntax that differ in their central assumptions and goals.

In general linguistics, a labile verb is a verb that undergoes causative alternation; that is, it can be used both transitively and intransitively, with the requirement that the direct object of its transitive use corresponds to the subject of its intransitive use, as in "I ring the bell" and "The bell rings." Labile verbs are a prominent feature of English, but they also occur in many other languages. When causatively alternating verbs are used transitively they are called causatives since, in the transitive use of the verb, the subject is causing the action denoted by the intransitive version. When causatively alternating verbs are used intransitively, they are referred to as anticausatives or inchoatives because the intransitive variant describes a situation in which the theme participant undergoes a change of state, becoming, for example, "opened".

Lexical functional grammar (LFG) is a constraint-based grammar framework in theoretical linguistics. It posits two separate levels of syntactic structure, a phrase structure grammar representation of word order and constituency, and a representation of grammatical functions such as subject and object, similar to dependency grammar. The development of the theory was initiated by Joan Bresnan and Ronald Kaplan in the 1970s, in reaction to the theory of transformational grammar which was current in the late 1970s. It mainly focuses on syntax, including its relation with morphology and semantics. There has been little LFG work on phonology.

In linguistics, a causative is a valency-increasing operation that indicates that a subject either causes someone or something else to do or be something or causes a change in state of a non-volitional event. Normally, it brings in a new argument, A, into a transitive clause, with the original subject S becoming the object O.

Theta roles are the names of the participant roles associated with a predicate: the predicate may be a verb, an adjective, a preposition, or a noun. If an object is in motion or in a steady state as the speakers perceives the state, or it is the topic of discussion, it is called a theme. The participant is usually said to be an argument of the predicate. In generative grammar, a theta role or θ-role is the formal device for representing syntactic argument structure—the number and type of noun phrases—required syntactically by a particular verb. For example, the verb put requires three arguments.

Dependency grammar (DG) is a class of modern grammatical theories that are all based on the dependency relation and that can be traced back primarily to the work of Lucien Tesnière. Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The (finite) verb is taken to be the structural center of clause structure. All other syntactic units (words) are either directly or indirectly connected to the verb in terms of the directed links, which are called dependencies. Dependency grammar differs from phrase structure grammar in that while it can identify phrases it tends to overlook phrasal nodes. A dependency structure is determined by the relation between a word and its dependents. Dependency structures are flatter than phrase structures in part because they lack a finite verb phrase constituent, and they are thus well suited for the analysis of languages with free word order, such as Czech or Warlpiri.

In linguistics, focus is a grammatical category that conveys which part of the sentence contributes new, non-derivable, or contrastive information. In the English sentence "Mary only insulted BILL", focus is expressed prosodically by a pitch accent on "Bill" which identifies him as the only person Mary insulted. By contrast, in the sentence "Mary only INSULTED Bill", the verb "insult" is focused and thus expresses that Mary performed no other actions towards Bill. Focus is a cross-linguistic phenomenon and a major topic in linguistics. Research on focus spans numerous subfields including phonetics, syntax, semantics, pragmatics, and sociolinguistics.

In linguistics, valency or valence is the number and type of arguments controlled by a predicate, content verbs being typical predicates. Valency is related, though not identical, to subcategorization and transitivity, which count only object arguments; valency counts all arguments, including the subject. The linguistic notion derives from the definition of valency in chemistry: just as an atom binds a specific number of other elements, in valency grammar the verb organizes the sentence by binding specific elements, such as complements and actants. Although the term originates from valence in chemistry, linguistic valency has a close analogy in mathematics under the term arity.
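The contrast drawn above, that valency counts the subject while subcategorization and transitivity count only the objects, can be made concrete with a toy verb table. The entries are illustrative:

```python
# Valency counts all arguments including the subject; subcategorization
# and transitivity count only the objects. Entries are illustrative.
VERB_ARGS = {
    "sleep": {"subject": 1, "objects": 0},  # monovalent (intransitive)
    "read":  {"subject": 1, "objects": 1},  # divalent (transitive)
    "give":  {"subject": 1, "objects": 2},  # trivalent (ditransitive)
}

def valency(verb):
    """Total argument count, subject included."""
    entry = VERB_ARGS[verb]
    return entry["subject"] + entry["objects"]

print(valency("give"))               # 3: valency includes the subject
print(VERB_ARGS["give"]["objects"])  # 2: subcategorized objects only
```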

In generative grammar, non-configurational languages are languages characterized by a flat phrase structure, which allows syntactically discontinuous expressions, and a relatively free word order.

In linguistics, especially within generative grammar, phi features are the morphological expression of a semantic process in which a word or morpheme varies with the form of another word or phrase in the same sentence. This variation can include person, number, gender, and case, as encoded in pronominal agreement with nouns and pronouns. Several other features are included in the set of phi-features, such as the categorical features ±N (nominal) and ±V (verbal), which can be used to describe lexical categories and case features.
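The covariation between a pronoun and its antecedent can be sketched as a comparison of phi-feature bundles: agreement succeeds when the features both items carry have the same values. The feature names and values below are illustrative:

```python
# Phi-feature bundles as dicts; agreement succeeds when every feature
# present on both items has the same value. Values are illustrative.
def agrees(controller, target):
    """Check that all phi features shared by both items match."""
    shared = controller.keys() & target.keys()
    return all(controller[f] == target[f] for f in shared)

she        = {"person": 3, "number": "sg", "gender": "fem"}
herself    = {"person": 3, "number": "sg", "gender": "fem"}
themselves = {"person": 3, "number": "pl"}

print(agrees(she, herself))     # True: person, number, gender all match
print(agrees(she, themselves))  # False: number mismatch
```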

The term predicate is used in two ways in linguistics and its subfields. The first defines a predicate as everything in a standard declarative sentence except the subject, and the other defines it as only the main content verb or associated predicative expression of a clause. Thus, by the first definition, the predicate of the sentence Frank likes cake is likes cake, while by the second definition, it is only the content verb likes, and Frank and cake are the arguments of this predicate. The conflict between these two definitions can lead to confusion.

In linguistics, nominalization or nominalisation is the use of a word that is not a noun as a noun, or as the head of a noun phrase. This change in functional category can occur through morphological transformation, but it does not always do so. Nominalization can refer, for instance, to the process of producing a noun from another part of speech by adding a derivational affix, but it can also refer to the complex noun that is formed as a result.

In linguistics, an argument is an expression that helps complete the meaning of a predicate, the latter referring in this context to a main verb and its auxiliaries. In this regard, the complement is a closely related concept. Most predicates take one, two, or three arguments. A predicate and its arguments form a predicate-argument structure. The discussion of predicates and arguments is associated most with (content) verbs and noun phrases (NPs), although other syntactic categories can also be construed as predicates and as arguments. Arguments must be distinguished from adjuncts. While a predicate needs its arguments to complete its meaning, the adjuncts that appear with a predicate are optional; they are not necessary to complete the meaning of the predicate. Most theories of syntax and semantics acknowledge arguments and adjuncts, although the terminology varies, and the distinction is generally believed to exist in all languages. Dependency grammars sometimes call arguments actants, following Lucien Tesnière (1959).

In linguistics, volition is a concept that distinguishes whether the subject or agent of a particular sentence intended an action or not; simply put, it is the intentional or unintentional nature of an action. Volition concerns the idea of control and, outside of psychology and cognitive science, is treated in linguistics as equivalent to intention. Volition can be expressed in a given language using a variety of methods, and these sentence forms usually indicate that an action has been done intentionally, or willingly. Languages mark volition in different ways: English verbs of volition such as "want" or "prefer" are not overtly marked, some languages mark volition with affixes, and others encode volitional or non-volitional meaning through complex structural consequences.

In certain theories of linguistics, thematic relations, also known as semantic roles, are the various roles that a noun phrase may play with respect to the action or state described by a governing verb, commonly the sentence's main verb. For example, in the sentence "Susan ate an apple", Susan is the doer of the eating, so she is an agent; an apple is the item that is eaten, so it is a patient.

In linguistics, subcategorization denotes the ability or need of lexical items to require or allow the presence of particular types of syntactic arguments with which they co-occur. For example, the verb "walk", as in "X walks home", requires the noun phrase X to be animate.
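A subcategorization frame can be sketched as a lexicon entry listing the complements a verb requires, alongside a requirement such as the animacy of the subject. The entries and the flat animacy flag are illustrative simplifications:

```python
# Toy subcategorization frames: the complements a verb requires, plus a
# simplified animacy requirement on the subject. Entries are illustrative.
LEXICON = {
    "devour": {"complements": ["NP"], "subject_animate": True},  # *"Sue devoured"
    "walk":   {"complements": [], "subject_animate": True},      # "X walks home"
    "elapse": {"complements": [], "subject_animate": False},     # "an hour elapsed"
}

def licenses(verb, complements, subject_animate):
    """Check complement count against the frame, then the animacy requirement."""
    entry = LEXICON[verb]
    if len(complements) != len(entry["complements"]):
        return False
    if entry["subject_animate"] and not subject_animate:
        return False
    return True

print(licenses("walk", [], subject_animate=True))    # True: "Sue walks home"
print(licenses("walk", [], subject_animate=False))   # False: *"The rock walks home"
print(licenses("devour", [], subject_animate=True))  # False: object is missing
```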

The lexical integrity hypothesis (LIH) or lexical integrity principle is a hypothesis in linguistics which states that syntactic transformations do not apply to subparts of words. It functions as a constraint on transformational grammar.

The Integrational theory of language is the general theory of language that has been developed within the general linguistic approach of integrational linguistics.

The lexicalist hypothesis, proposed by Noam Chomsky, claims that syntactic transformations can operate only on syntactic constituents. It states that the system of grammar that assembles words is separate and different from the system of grammar that assembles phrases out of words.

In linguistics, the syntax–semantics interface is the interaction between syntax and semantics. Its study encompasses phenomena that pertain to both syntax and semantics, with the goal of explaining correlations between form and meaning. Specific topics include scope, binding, and lexical semantic properties such as verbal aspect and nominal individuation, semantic macroroles, and unaccusativity.

References

  1. Pustejovsky, J. (2005). "Lexical Semantics: Overview". In Encyclopedia of Language and Linguistics, second edition, Volumes 1–14.
  2. Taylor, J. (2017). "Lexical Semantics". In B. Dancygier (Ed.), The Cambridge Handbook of Cognitive Linguistics (Cambridge Handbooks in Language and Linguistics, pp. 246–261). Cambridge: Cambridge University Press. doi:10.1017/9781316339732.017
  3. Pustejovsky, James (1995). The Generative Lexicon. MIT Press. ISBN 9780262661409.
  4. Di Sciullo, Anne-Marie; Williams, Edwin (1987). On the Definition of Word. Cambridge, MA: MIT Press.
  5. Geeraerts, Dirk (2010). "Introduction", p. xiv, in Theories of Lexical Semantics.
  6. Loos, Eugene; Anderson, Susan; Day, Dwight H., Jr.; Jordan, Paul; Wingate, J. Douglas. "What is a lexical relation?". Glossary of Linguistic Terms. LinguaLinks.
  7. Faber, Pamela B.; Mairal Usón, Ricardo (1999). "Constructing a Lexicon of English Verbs". Functional Grammar 23 (illustrated ed.). Walter de Gruyter. p. 350. ISBN 9783110164169.
  8. Lehrer, Adrienne (1985). "The influence of semantic fields on semantic change" (PDF). Historical Semantics, Historical Word Formation. Walter de Gruyter. pp. 283–296.
  9. Grandy, Richard E. (2012). "Semantic Fields, Prototypes, and the Lexicon". Frames, Fields, and Contrasts: New Essays in Semantic and Lexical Organization. Routledge. pp. 103–122. ISBN   9781136475801.
  10. Malaia; et al. (2012), "Effects of Verbal Event Structure on Online Thematic Role Assignment", Journal of Psycholinguistic Research, 41 (5): 323–345, doi:10.1007/s10936-011-9195-x, PMID   22120140, S2CID   207201471
  11. Pustejovsky, James (1991). "The syntax of event structure" (PDF). Cognition. 41 (1–3): 47–81. doi:10.1016/0010-0277(91)90032-y. PMID 1790655. S2CID 16966452.
  12. Sportiche, Dominique; Koopman, Hilda; Stabler, Edward (2014). An Introduction to Syntactic Analysis and Theory. Wiley Blackwell.
  13. Chomsky, Noam (1957). Syntactic Structures. Mouton de Gruyter.
  14. Scalise, Sergio; Guevara, Emiliano (1985). "The Lexicalist Approach to Word-Formation".
  15. Fodor, Jerry; Lepore, Ernie (Aug 1999). "All at Sea in Semantic Space". The Journal of Philosophy. 96 (8): 381–403. doi:10.5840/jphil199996818. JSTOR   2564628. S2CID   14948287.
  16. Pinker, S. (1989). Learnability and Cognition: The Acquisition of Argument Structure. Cambridge, MA: MIT Press. p. 89.
  17. Harley, Heidi. "Events, agents and the interpretation of VP-shells." (1996).
  18. Kayne, Richard S. The antisymmetry of syntax. No. 25. MIT Press, 1994.
  19. Hale, Kenneth; Keyser, Samuel Jay (1993). "On Argument Structures and the Lexical Expression of Syntactic Relations". Essays in Linguistics in Honor of Sylvain Bromberger.
  20. Paul Bennett, 2003. Review of Ken Hale and Samuel Keyser, Prolegomenon to a Theory of Argument Structure. Machine Translation. Vol 18. Issue 1
  21. Halle, Morris; Marantz, Alec (1993), Distributed Morphology and the Pieces of Inflection, The View from Building 20 (Cambridge, MA: MIT Press): 111–176
  22. Marantz, Alec (1997). "No escape from syntax: Don't try morphological analysis in the privacy of your own Lexicon". Proceedings of the 21st Annual Penn Linguistics Colloquium: Penn Working Papers in Linguistics.
  23. Ramchand, Gillian (2008). Verb Meaning and the Lexicon: A First Phase Syntax. Cambridge University Press. ISBN 9780511486319.
  24. Lappin, S. (Ed.) (1996). Handbook of Contemporary Semantic Theory. Oxford, UK: Blackwell Publishers.
  25. Loporcaro, M. (2003). The Unaccusative Hypothesis and participial absolutes in Italian: Perlmutter’s generalization revised. Rivista di Linguistica/Italian Journal of Linguistics, 15, 199-263.
  26. Johnson, Kent (2008). "An Overview of Lexical Semantics" (PDF). Philosophy Compass: 119–134.
  27. Haspelmath, Martin (1993). "More on the typology of inchoative/causative verb alternations". In Bernard Comrie & Maria Polinsky (eds.), Causatives and Transitivity. Studies in Language Companion Series, Vol. 23. Benjamins. pp. 87–121. doi:10.1075/slcs.23.05has. ISBN 978-90-272-3026-3.
  28. Piñón, Christopher (2001). "A finer look at the causative-inchoative alternation": 346–364.
  29. Tham, S (2013). "Change of state verbs and result state adjectives in Mandarin Chinese". Journal of Linguistics. 49 (3): 647–701. doi:10.1017/s0022226713000261.
  30. Schäfer, Florian (2008). The Syntax of (Anti-)Causatives. John Benjamins Publishing Company. p. 1. ISBN 9789027255099.
  31. Kayne, R. (1981). "Unambiguous paths". In R. May & F. Koster (Eds.), Levels of Syntactic Representation (pp. 143–184). Cinnaminson, NJ: Foris Publications.
  32. Larson, Richard (1988). "On the Double Object Construction". Linguistic Inquiry. 19 (3): 589–632. JSTOR 25164901.
  33. Miyagawa, Shigeru; Tsujioka, Takae (2004). "Argument Structure and Ditransitive Verbs in Japanese". Journal of East Asian Linguistics. 13 (1): 1–38. CiteSeerX   10.1.1.207.6553 . doi:10.1023/b:jeal.0000007345.64336.84. S2CID   122993837.
  34. Bruening, Benjamin (November 2010). "Ditransitive Asymmetries and a Theory of Idiom Formation". Linguistic Inquiry. 41 (4): 519–562. doi:10.1162/LING_a_00012. S2CID   57567192.
  35. Beck, Sigrid; Johnson, Kyle (2004). "Double Objects Again" (PDF). Linguistic Inquiry. 35 (1): 97–124. doi:10.1162/002438904322793356. S2CID 18749803.