| FrameNet | |
| --- | --- |
| Mission statement | Building a lexical database based on a theory of meaning called frame semantics. |
| Commercial? | No (freely available for download) |
| Type of project | Lexical database (containing frames, frame elements (FEs), lexical units (LUs), example sentences, and frame relations) |
| Location | International Computer Science Institute in Berkeley, California |
| Owner | Collin Baker (current project manager) |
| Founder | Charles J. Fillmore |
| Established | 1997 |
| Website | framenet |
FrameNet is a group of online lexical databases based on the theory of meaning known as frame semantics, developed by linguist Charles J. Fillmore. The project's fundamental notion is simple: most word meanings are best understood in terms of a semantic frame, a description of a certain kind of event, relation, or entity and the participants in it.
As an illustration, the act of cooking usually requires the following: a cook, the food being cooked, a container to hold the food while it is being cooked, and a heating instrument. [1] Within FrameNet, this act is represented by a frame named Apply_heat, and its components (Cook, Food, Container, and Heating_instrument) are referred to as frame elements (FEs). The Apply_heat frame also lists a number of words that represent it, known as lexical units (LUs), such as fry, bake, boil, and broil.
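The frame, FE, and LU structure just described can be sketched as a small data model. This is a toy illustration, not FrameNet's actual schema; only the Apply_heat names are taken from the article.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A toy stand-in for a FrameNet frame: a name, its frame
    elements, and the lexical units that evoke it."""
    name: str
    frame_elements: list = field(default_factory=list)  # FEs, e.g. Cook, Food
    lexical_units: list = field(default_factory=list)   # LUs, e.g. fry, bake

APPLY_HEAT = Frame(
    name="Apply_heat",
    frame_elements=["Cook", "Food", "Container", "Heating_instrument"],
    lexical_units=["fry", "bake", "boil", "broil"],
)

def frames_evoked_by(word, frames):
    """Return the names of every frame whose LU list contains the word."""
    return [f.name for f in frames if word in f.lexical_units]

print(frames_evoked_by("fry", [APPLY_HEAT]))  # ['Apply_heat']
```

Looking up a word in such an index is the first step a frame-semantic parser takes: identify the evoked frame, then search the sentence for its frame elements.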
Some frames are simpler. Placing, for example, has only an agent or cause, a theme (the thing that is placed), and the location where it is placed. Others are more complex: Revenge contains more FEs (offender, injury, injured party, avenger, and punishment).[citation needed] As in the examples of Apply_heat and Revenge below, FrameNet's role is to define the frames and to annotate sentences that demonstrate how the FEs fit syntactically around the word that evokes the frame. [1]
A frame is a schematic representation of a situation involving various participants, props, and other conceptual roles. Examples of frame names are Being_born and Locative_relation. A frame in FrameNet contains a textual description of what it represents (a frame definition), associated frame elements, lexical units, example sentences, and frame-to-frame relations.
Frame elements (FEs) add information to the semantic structure of a sentence. Each frame has a number of core and non-core FEs, which can be thought of as semantic roles. Core FEs are essential to the meaning of the frame, while non-core FEs are generally descriptive (such as time, place, or manner). [2]
FrameNet includes shallow data on syntactic roles that frame elements play in the example sentences. For example, for a sentence like "She was born about AD 460", FrameNet would mark She as a noun phrase referring to the Child frame element, and "about AD 460" as a noun phrase corresponding to the Time frame element. Details of how frame elements can be realized in a sentence are important because this reveals important information about the subcategorization frames as well as possible diathesis alternations (e.g. "John broke the window" vs. "The window broke") of a verb.
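A FrameNet-style annotation of the "She was born about AD 460" example can be sketched as a plain data structure recording, for each frame element, its filler text, phrase type, and character span. The representation below is invented for illustration and is far simpler than FrameNet's actual annotation format.

```python
sentence = "She was born about AD 460"

# Toy annotation: the frame, the frame-evoking target word, and each
# realized frame element with its phrase type and character offsets.
annotation = {
    "frame": "Being_born",
    "target": "born",
    "frame_elements": [
        {"fe": "Child", "text": "She",          "phrase_type": "NP", "span": (0, 3)},
        {"fe": "Time",  "text": "about AD 460", "phrase_type": "NP", "span": (13, 25)},
    ],
}

def filler(annotation, fe_name):
    """Return the text realizing a given frame element, or None if it
    is not expressed in this sentence."""
    for fe in annotation["frame_elements"]:
        if fe["fe"] == fe_name:
            return fe["text"]
    return None
```

Recording phrase types and spans like this is what makes the subcategorization and diathesis-alternation observations mentioned above possible: the same query run over many annotated sentences reveals which syntactic positions each FE can occupy.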
Lexical units (LUs) are lemmas, with their part of speech, that evoke a specific frame. In other words, when an LU is identified in a sentence, it can be associated with its specific frame(s). Many LUs may be associated with a single frame, and a single LU may be associated with many frames; the latter is typically the case for LUs with multiple word senses. [2] Alongside the frame, each lexical unit is associated with specific frame elements by means of the annotated example sentences.
For example, lexical units that evoke the Complaining frame (or more specific perspectivized versions of it, to be precise), include the verbs complain, grouse, lament, and others. [5]
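The many-to-many relation between LUs and frames can be sketched as an inverted index. The Complaining frame and its LUs come from the text above; the second frame and its membership are invented solely to show a polysemous LU evoking more than one frame.

```python
from collections import defaultdict

# Frame-to-LU table. "Complaining" and its LUs are from the article;
# "Regret" and its membership are illustrative only and may not match
# actual FrameNet data.
FRAME_LUS = {
    "Complaining": ["complain.v", "grouse.v", "lament.v"],
    "Regret": ["lament.v"],
}

# Invert it: each LU maps to every frame that lists it.
lu_to_frames = defaultdict(list)
for frame, lus in FRAME_LUS.items():
    for lu in lus:
        lu_to_frames[lu].append(frame)

print(sorted(lu_to_frames["lament.v"]))  # ['Complaining', 'Regret']
```

An ambiguous LU such as lament.v returns several candidate frames; picking the right one for a given sentence is a word-sense-disambiguation problem, which is why that task is listed among the related concepts below.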
Frames are associated with example sentences, and frame elements are marked within those sentences. Thus, the sentence "She was born about AD 460" is associated with the frame Being_born, while She is marked as the frame element Child and "about AD 460" as Time. [3]
From the start, the FrameNet project has been committed to looking at evidence from actual language use as found in text collections like the British National Corpus. Based on such example sentences, automatic semantic role labeling tools are able to determine frames and mark frame elements in new sentences.
FrameNet also exposes statistics on the valence of each frame, that is, the number and position of the frame elements within example sentences. The sentence "She was born about AD 460" falls into a valence pattern that occurs twice in FrameNet's annotation report for the born.v lexical unit. [3]
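Counting valence patterns amounts to reducing each annotated sentence to the ordered tuple of (frame element, phrase type) pairs realized around the target, then tallying identical tuples. The annotations below are invented for illustration; FrameNet's real reports also record grammatical functions.

```python
from collections import Counter

# Each annotated example reduced to its valence pattern. The first entry
# mirrors "She was born about AD 460"; the others are made up.
examples = [
    [("Child", "NP"), ("Time", "NP")],
    [("Child", "NP"), ("Time", "NP")],
    [("Child", "NP"), ("Place", "PP")],
]

# Tuples are hashable, so Counter can tally identical patterns directly.
pattern_counts = Counter(tuple(e) for e in examples)
print(pattern_counts[(("Child", "NP"), ("Time", "NP"))])  # 2
```

The resulting table is exactly the kind of per-LU valence report the article describes: each distinct pattern with the number of annotated sentences exhibiting it.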
FrameNet additionally captures relationships between different frames by means of frame-to-frame relations, such as Inheritance, in which a child frame is a more specific elaboration of its parent frame.
FrameNet has proven to be useful in a number of computational applications, because computers need additional knowledge in order to recognize that "John sold a car to Mary" and "Mary bought a car from John" describe essentially the same situation, despite using two quite different verbs, different prepositions and a different word order. FrameNet has been used in applications like question answering, paraphrasing, recognizing textual entailment, and information extraction, either directly or by means of Semantic Role Labeling tools. The first automatic system for Semantic Role Labeling (SRL, sometimes also referred to as "shallow semantic parsing") was developed by Daniel Gildea and Daniel Jurafsky based on FrameNet in 2002. [6] Semantic Role Labeling has since become one of the standard tasks in natural language processing, with the latest version (1.7) of FrameNet now fully supported in the Natural Language Toolkit. [7]
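The buy/sell example can be made concrete with a toy normalization step: map each verb's syntactic slots onto shared frame roles, so that both sentences yield the same frame instance. The role names loosely follow FrameNet's commerce frames (Seller, Buyer, Goods), but the slot-to-role tables here are invented for illustration.

```python
# Toy mapping from verb-specific argument slots to shared frame roles.
# The tables are illustrative, not FrameNet's actual valence data.
ROLE_MAP = {
    "sell": {"subject": "Seller", "object": "Goods", "to": "Buyer"},
    "buy":  {"subject": "Buyer",  "object": "Goods", "from": "Seller"},
}

def to_frame(verb, args):
    """Normalize verb-specific arguments into shared frame roles."""
    return {ROLE_MAP[verb][slot]: value for slot, value in args.items()}

sold = to_frame("sell", {"subject": "John", "object": "a car", "to": "Mary"})
bought = to_frame("buy", {"subject": "Mary", "object": "a car", "from": "John"})
print(sold == bought)  # True: both describe the same transaction
```

This is the essence of what a Semantic Role Labeling system does at scale; for experimenting with the real data, NLTK ships a FrameNet corpus reader (`nltk.corpus.framenet`, after downloading the `framenet_v17` corpus).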
Since frames are essentially semantic descriptions, they are similar across languages, and several projects have arisen over the years that have relied on the original FrameNet as the basis for additional non-English FrameNets, for Spanish, Japanese, German, and Polish, among others.
In linguistics, an affix is a morpheme that is attached to a word stem to form a new word or word form. The main two categories are derivational and inflectional affixes. Derivational affixes, such as un-, -ation, anti-, pre- etc., introduce a semantic change to the word they are attached to. Inflectional affixes introduce a syntactic change, such as singular into plural, or present simple tense into present continuous or past tense by adding -ing, -ed to an English word. All of them are bound morphemes by definition; prefixes and suffixes may be separable affixes.
Word-sense disambiguation is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious.
Lexical semantics, as a subfield of linguistic semantics, is the study of word meanings. It includes the study of how words structure their meaning, how they act in grammar and compositionality, and the relationships between the distinct senses and uses of a word.
In general linguistics, a labile verb is a verb that undergoes causative alternation; that is, it can be used both transitively and intransitively, with the requirement that the direct object of its transitive use corresponds to the subject of its intransitive use, as in "I ring the bell" and "The bell rings." Labile verbs are a prominent feature of English, and also occur in many other languages. When causatively alternating verbs are used transitively they are called causatives since, in the transitive use of the verb, the subject is causing the action denoted by the intransitive version. When causatively alternating verbs are used intransitively, they are referred to as anticausatives or inchoatives because the intransitive variant describes a situation in which the theme participant undergoes a change of state, becoming, for example, "rung".
In linguistics, a causative is a valency-increasing operation that indicates that a subject either causes someone or something else to do or be something or causes a change in state of a non-volitional event. Normally, it brings in a new argument, A, into a transitive clause, with the original subject S becoming the object O.
Force dynamics is a semantic category that describes the way in which entities interact with reference to force. Force dynamics gained a good deal of attention in cognitive linguistics due to its claims of psychological plausibility and the elegance with which it generalizes ideas not usually considered in the same context. The semantic category of force dynamics pervades language on several levels. Not only does it apply to expressions in the physical domain like leaning on or dragging, but it also plays an important role in expressions involving psychological forces. Furthermore, the concept of force dynamics can be extended to discourse. For example, the situation in which speakers A and B argue, after which speaker A gives in to speaker B, exhibits a force dynamic pattern.
According to some linguistics theories, a stative verb is a verb that describes a state of being, in contrast to a dynamic verb, which describes an action. The difference can be categorized by saying that stative verbs describe situations that are static, or unchanging throughout their entire duration, whereas dynamic verbs describe processes that entail change over time. Many languages distinguish between these two types in terms of how they can be used grammatically.
In linguistics, valency or valence is the number and type of arguments and complements controlled by a predicate, content verbs being typical predicates. Valency is related, though not identical, to subcategorization and transitivity, which count only object arguments; valency counts all arguments, including the subject. The linguistic meaning of valency derives from the definition of valency in chemistry: just as a chemical element binds a specific number of other elements, in valency theory a verb organizes a sentence by binding specific elements, such as complements and actants. Although the term originates from valence in chemistry, linguistic valency has a close analogy in mathematics under the term arity.
Cognitive semantics is part of the cognitive linguistics movement. Semantics is the study of linguistic meaning. Cognitive semantics holds that language is part of a more general human cognitive ability, and can therefore only describe the world as people conceive of it. It is implicit that different linguistic communities conceive of simple things and processes in the world differently, not necessarily some difference between a person's conceptual world and the real world.
Charles J. Fillmore was an American linguist and Professor of Linguistics at the University of California, Berkeley. He received his Ph.D. in Linguistics from the University of Michigan in 1961. Fillmore spent ten years at Ohio State University and a year as a Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University before joining Berkeley's Department of Linguistics in 1971. Fillmore was influential in the areas of syntax and lexical semantics.
Frame semantics is a theory of linguistic meaning developed by Charles J. Fillmore that extends his earlier case grammar. It relates linguistic semantics to encyclopedic knowledge. The basic idea is that one cannot understand the meaning of a single word without access to all the essential knowledge that relates to that word. For example, one would not be able to understand the word "sell" without knowing anything about the situation of commercial transfer, which also involves, among other things, a seller, a buyer, goods, money, the relation between the money and the goods, the relations between the seller and the goods and the money, the relation between the buyer and the goods and the money and so on. Thus, a word activates, or evokes, a frame of semantic knowledge relating to the specific concept to which it refers.
PropBank is a corpus that is annotated with verbal propositions and their arguments—a "proposition bank". Although "PropBank" refers to a specific corpus produced by Martha Palmer et al., the term propbank is also coming to be used as a common noun referring to any corpus that has been annotated with propositions and their arguments.
In frame semantics, a theory of linguistic meaning, null instantiation is the name of a category used to annotate, or tag, absent semantic constituents or frame elements. Frame semantics, best exemplified by the FrameNet project, views words as evoking frames of knowledge and frames as typically involving multiple components, called frame elements. The term null refers to the fact that the frame element in question is absent. The logical object of the term instantiation refers to the frame element itself. So, null instantiation is an empty instantiation of a frame element. Ruppenhofer and Michaelis postulate an implicational regularity tying the interpretation type of an omitted argument to the frame membership of its predicator: "If a particular frame element role is lexically omissible under a particular interpretation for one LU [lexical unit] in a frame, then for any other LUs in the same frame that allow the omission of this same FE [frame element], the interpretation of the missing FE is the same."
Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations".
In certain theories of linguistics, thematic relations, also known as semantic roles, are the various roles that a noun phrase may play with respect to the action or state described by a governing verb, commonly the sentence's main verb. For example, in the sentence "Susan ate an apple", Susan is the doer of the eating, so she is an agent; an apple is the item that is eaten, so it is a patient.
The linguistics wars were extended disputes among American theoretical linguists that occurred mostly during the 1960s and 1970s, stemming from a disagreement between Noam Chomsky and several of his associates and students. The debates started in 1967 when linguists Paul Postal, John R. Ross, George Lakoff, and James D. McCawley (self-dubbed the "Four Horsemen of the Apocalypse") proposed an alternative approach in which the relation between semantics and syntax is viewed differently, treating deep structures as meaning rather than as syntactic objects. While Chomsky and other generative grammarians argued that meaning is driven by an underlying syntax, generative semanticists posited that syntax is shaped by an underlying meaning. This intellectual divergence led to two competing frameworks: generative semantics and interpretive semantics.
Meaning–text theory (MTT) is a theoretical linguistic framework, first put forward in Moscow by Aleksandr Žolkovskij and Igor Mel’čuk, for the construction of models of natural language. The theory provides a large and elaborate basis for linguistic description and, due to its formal character, lends itself particularly well to computer applications, including machine translation, phraseology, and lexicography.
In natural language processing, semantic role labeling is the process that assigns labels to words or phrases in a sentence that indicates their semantic role in the sentence, such as that of an agent, goal, or result.
In linguistics, subcategorization denotes the ability/necessity for lexical items to require/allow the presence and types of the syntactic arguments with which they co-occur. For example, the word "walk" as in "X walks home" requires the noun-phrase X to be animate.
Syntactic bootstrapping is a theory in developmental psycholinguistics and language acquisition which proposes that children learn word meanings by recognizing syntactic categories and the structure of their language. It is proposed that children have innate knowledge of the links between syntactic and semantic categories and can use these observations to make inferences about word meaning. Learning words in one's native language can be challenging because the extralinguistic context of use does not give specific enough information about word meanings. Therefore, in addition to extralinguistic cues, conclusions about syntactic categories are made which then lead to inferences about a word's meaning. This theory aims to explain the acquisition of lexical categories such as verbs, nouns, etc. and functional categories such as case markers, determiners, etc.