In linguistics, linguistic competence is the system of unconscious knowledge that speakers have of their language. It is distinguished from linguistic performance, which includes all other factors that allow speakers to use their language in practice.
In approaches to linguistics which adopt this distinction, competence would normally be considered responsible for the fact that "I like ice cream" is a possible sentence of English, the particular proposition that it denotes, and the particular sequence of phones that it consists of. Performance, on the other hand, would be responsible for the real-time processing required to produce or comprehend it, for the particular role it plays in a discourse, and for the particular sound wave one might produce while uttering it.
The distinction is widely adopted in formal linguistics, where competence and performance are typically studied independently. However, it is not used in other approaches including functional linguistics and cognitive linguistics, and it has been criticized in particular for turning performance into a wastebasket for hard-to-handle phenomena.
Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogeneous speech-community, who knows its (the speech community's) language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of this language in actual performance. (Chomsky, 1965, p. 3) [1]
Chomsky differentiates competence, an idealized capacity, from performance, the production of actual utterances. According to him, competence is the ideal speaker-hearer's knowledge of his or her language, and it is the 'mental reality' responsible for all those aspects of language use which can be characterized as 'linguistic'. [2] [ page needed ] Chomsky argues that only in an idealized situation, where the speaker-hearer is unaffected by grammatically irrelevant conditions such as memory limitations and distractions, will performance be a direct reflection of competence. A sample of natural speech, with its numerous false starts and other deviations, will not provide such data. Therefore, he claims that a fundamental distinction has to be made between competence and performance. [1] [ page needed ]
Chomsky dismissed, as unwarranted and completely misdirected, criticisms of delimiting the study of performance in favor of the study of underlying competence. He claims that the descriptivist limitation-in-principle to classifying and organizing data, the practice of "extracting patterns" from a corpus of observed speech, and the describing of "speech habits" are the core factors precluding the development of a theory of actual performance. [1] [ page needed ]
Linguistic competence is treated as a more comprehensive term by lexicalists, such as Jackendoff and Pustejovsky, within the generative school of thought. They assume a modular lexicon: a set of lexical entries containing the semantic, syntactic and phonological information deemed necessary to parse a sentence. [3] [4] In the generative lexicalist view this information is intimately tied up with linguistic competence. Nevertheless, their models remain in line with mainstream generative research in adhering to strong innateness, modularity and the autonomy of syntax. [5]
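The shape of such a modular lexicon can be sketched in code. The following Python fragment is a toy illustration, not any lexicalist's actual formalism: all field names and entries are hypothetical, chosen only to show how phonological, syntactic and semantic information might be bundled in a single lexical entry.

```python
# A minimal sketch of a modular lexical entry; every field name here is
# hypothetical, chosen to illustrate how phonological, syntactic, and
# semantic information might be bundled per word.
from dataclasses import dataclass

@dataclass
class LexicalEntry:
    form: str            # orthographic form
    phonology: str       # e.g. an IPA transcription
    category: str        # syntactic category, e.g. "V", "N"
    subcat: tuple = ()   # subcategorization frame: complements the word selects
    semantics: str = ""  # placeholder semantic representation

# A toy lexicon: a parser would consult entries like these to build a sentence.
LEXICON = {
    "devour": LexicalEntry("devour", "/dɪˈvaʊə/", "V", ("NP",), "λy.λx.devour(x,y)"),
    "cake": LexicalEntry("cake", "/keɪk/", "N", (), "cake'"),
}
```

On this view, competence includes not just rules but the fine-grained information stored in each such entry.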
Ray S. Jackendoff's model deviates from traditional generative grammar in that, unlike Chomsky's, it does not treat syntax as the main generative component from which meaning and phonology are developed. According to him, a generative grammar consists of five major components: the lexicon, the base component, the transformational component, the phonological component and the semantic component. [nb 1] [6] Against the syntax-centered view of generative grammar (syntactocentrism), he treats phonology, syntax and semantics as three parallel generative processes, coordinated through interface processes. He further subdivides each of these three processes into various "tiers", themselves coordinated by interfaces. He clarifies, however, that those interfaces are not sensitive to every aspect of the processes they coordinate: phonology, for instance, is affected by some aspects of syntax, but not vice versa.
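The parallel architecture can be caricatured in a few lines of code. The sketch below is a loose analogy, not Jackendoff's formalism: three independent generators produce phonological, syntactic and semantic structures, and an interface constraint checks only a limited correspondence between two of them.

```python
# A toy analogy for the parallel architecture: phonology, syntax, and semantics
# are generated independently and licensed jointly by interface constraints.
# All structures and constraints here are invented placeholders.

def generate_phonology(words):   # a flat prosodic string
    return " ".join(words)

def generate_syntax(words):      # a toy bracketing
    return ("S", ("NP", words[0]), ("VP", words[1:]))

def generate_semantics(words):   # a toy predicate-argument structure
    return {"pred": words[1], "arg0": words[0], "arg1": words[2]}

def syntax_semantics_interface(syn, sem):
    # The interface inspects only *some* aspects of each structure: here,
    # that the syntactic subject matches the semantic first argument.
    return syn[1][1] == sem["arg0"]

words = ["cats", "chase", "mice"]
phon = generate_phonology(words)
syn = generate_syntax(words)
sem = generate_semantics(words)
assert syntax_semantics_interface(syn, sem)  # licensed jointly, not derived from syntax
```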
In contrast to the static view of word meaning (where each word is characterized by a predetermined number of word senses), which imposes a tremendous bottleneck on the performance of any natural language processing system, Pustejovsky proposes that the lexicon becomes an active and central component of the linguistic description. The essence of his theory is that the lexicon functions generatively: first by providing a rich and expressive vocabulary for characterizing lexical information; then by developing a framework for manipulating fine-grained distinctions in word descriptions; and finally by formalizing a set of mechanisms for the specialized composition of aspects of such descriptions, so that extended and novel senses are generated as words occur in context. [7]
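A toy sketch can convey the flavor of this generativity. In the fragment below, an underspecified verb like enjoy recovers an activity from the noun it combines with; the qualia entries and the coercion rule are illustrative placeholders, not Pustejovsky's actual Generative Lexicon formalism.

```python
# Toy Pustejovsky-style sense generation in context; the qualia entries and
# the coercion rule are invented placeholders, not the actual formalism.

QUALIA = {
    # each noun lists a telic quale: the activity it is conventionally "for"
    "book": {"telic": "read"},
    "coffee": {"telic": "drink"},
}

def coerce(verb, noun):
    """Resolve an underspecified verb like 'enjoy' by consulting the noun's qualia."""
    if verb == "enjoy":                   # 'enjoy' wants an event as its object,
        activity = QUALIA[noun]["telic"]  # so recover one from the noun's telic role
        return f"enjoy({activity}({noun}))"
    return f"{verb}({noun})"

print(coerce("enjoy", "book"))    # enjoy(read(book))   -- a novel sense built in context
print(coerce("enjoy", "coffee"))  # enjoy(drink(coffee))
```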
Katz and Fodor suggest that a grammar should be thought of as a system of rules relating the externalized form of the sentences of a language to their meanings, which are to be expressed in a universal semantic representation, just as sounds are expressed in a universal phonetic representation. They hope that by making semantics an explicit part of generative grammar, more incisive studies of meaning become possible. Since they assume that semantic representations are not formally similar to syntactic structures, they suggest that a complete linguistic description must include a new set of rules, a semantic component, to relate meanings to syntactic and/or phonological structure. Their theory can be summed up in their slogan "linguistic description minus grammar equals semantics". [6] [8]
A broad front of linguists have critiqued the notion of linguistic competence, often severely. Functionalists, who advocate a usage-based approach to linguistics, hold that linguistic competence is derived from and informed by language use, that is, by performance, the directly opposite view to the generative model. [9] [10] As a result, functionalist theories emphasize experimental methods for understanding the linguistic competence of individuals.
Sociolinguists have argued that the competence/performance distinction basically serves to privilege data from certain linguistic genres and socio-linguistic registers as used by the prestige group, while discounting evidence from low-prestige genres and registers as being simply mis-performance. [11]
Noted linguist John Lyons, who works on semantics, has likewise criticized the distinction; Dell Hymes, quoting Lyons, says that "probably now there is widespread agreement" with the latter's assessment. [13]
Many linguists including M.A.K. Halliday and Labov have argued that the competence/performance distinction makes it difficult to explain language change and grammaticalization, which can be viewed as changes in performance rather than competence. [14]
Another critique of the concept of linguistic competence is that it does not fit the data from actual usage where the felicity of an utterance often depends largely on the communicative context. [15] [16]
Neurolinguist Harold Goodglass has argued that performance and competence are intertwined in the mind, since, "like storage and retrieval, they are inextricably linked in brain damage." [17]
Cognitive linguistics is a loose collection of systems that gives more weight to semantics and considers all usage phenomena, including metaphor and language change. Here, a number of pioneers such as George Lakoff, Ronald Langacker, and Michael Tomasello have strongly opposed the competence-performance distinction. The textbook by Vyvyan Evans and Melanie Green states:
"In rejecting the distinction between competence and performance cognitive linguists argue that knowledge of language is derived from patterns of language use, and further, that knowledge of language is knowledge of how language is used." p. 110 [18]
Numerous experiments on infants in the last two decades have shown that they are able to segment words (frequently co-occurring sound sequences) from other sounds in a stream of meaningless syllables. [19] This, together with computational results showing that recurrent neural networks can learn syntax-like patterns, [20] led to widespread questioning of the nativist assumptions underlying psycholinguistic work up to the nineties. [21]
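The statistical-learning idea behind the infant results can be illustrated with a few lines of Python. The syllable stream below is invented, in the style of Saffran-type stimuli; word boundaries are posited wherever the transitional probability between adjacent syllables dips, which is one simplified reading of what the infants are tracking.

```python
# Segment a nonsense-syllable stream by transitional probability: boundaries
# fall where the next syllable is not fully predictable from the current one.
# The stream is an invented example in the style of Saffran-type stimuli.
from collections import Counter

stream = ("bi da ku pa do ti go la bu bi da ku go la bu "
          "pa do ti bi da ku pa do ti go la bu").split()

pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def tp(a, b):
    """P(b | a): how often syllable a is followed by b."""
    return pairs[(a, b)] / firsts[a]

words, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if tp(a, b) < 1.0:            # a dip in predictability: posit a boundary
        words.append("".join(current))
        current = []
    current.append(b)
words.append("".join(current))
print(words)  # recovers "words" like bidaku, padoti, golabu
```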
According to experimental linguist N.S. Sutherland, the task of psycholinguistics is not to confirm Chomsky's account of linguistic competence by undertaking experiments, but, by doing experiments, to find out the mechanisms that underlie linguistic competence. [22] Psycholinguists generally reject the distinction between performance and competence. [23]
Psycholinguists have also criticized the competence-performance distinction for the difficulty it creates in modeling dialogue.
The narrow definition of competence espoused by generativists gave rise to the field of pragmatics, in which concerns other than language proper have become dominant. This led to a more inclusive notion, communicative competence, proposed by Dell Hymes to include social aspects of language use. [25] [26] This situation has had some unfortunate side effects.
Hymes's major criticism of Chomsky's notion of linguistic competence is its inadequate distinction between competence and performance. He further commented that it is unreal, and that no significant progress in linguistics is possible without studying forms together with the ways in which they are used. On this view, linguistic competence should fall under the domain of communicative competence, which comprises four competence areas: linguistic, sociolinguistic, discourse and strategic. [28]
Linguistic competence is commonly used and discussed in many language acquisition studies. Some of the more common ones are in the language acquisition of children, aphasics and multilinguals.
The Chomskyan view of language acquisition argues that humans have an innate ability – universal grammar – to acquire language. [29] However, a list of universal aspects underlying all languages has been hard to identify.
Another view, held by scientists specializing in language acquisition such as Tomasello, argues that young children's early language is concrete and item-based, which implies that their speech is based on the lexical items known to them from their environment and the language of their caretakers. In addition, children do not produce creative utterances about past experiences and future expectations, because they have not had enough exposure to their target language to do so. This indicates that exposure to language plays a greater role in a child's linguistic competence than innate abilities alone. [30]
Aphasia refers to a family of clinically diverse disorders that affect the ability to communicate by oral or written language, or both, following brain damage. [31] In aphasia, the inherent neurological damage is frequently assumed to be a loss of implicit linguistic competence that has damaged or wiped out neural centers or pathways that are necessary for maintenance of the language rules and representations needed to communicate. The measurement of implicit language competence, although apparently necessary and satisfying for theoretic linguistics, is complexly interwoven with performance factors. Transience, stimulability, and variability in aphasia language use provide evidence for an access deficit model that supports performance loss. [32]
The definition of a multilingual [nb 2] has not always been clear-cut. In defining a multilingual, the pronunciation, morphology and syntax used by the speaker in the language are key criteria used in the assessment. Mastery of vocabulary is sometimes also taken into consideration, but it is not the most important criterion, as one can acquire the lexicon of a language without knowing how to use it properly.
When discussing the linguistic competence of a multilingual, both communicative competence and grammatical competence are often taken into consideration, as it is imperative for a speaker to have the knowledge to use language correctly and accurately. To test for grammatical competence in a speaker, grammaticality judgments of utterances are often used. Communicative competence, on the other hand, is assessed through the use of appropriate utterances in different settings. [33]
Language is often implicated in humor. For example, the structural ambiguity of sentences is a key source for jokes. Take Groucho Marx's line from Animal Crackers: "One morning I shot an elephant in my pajamas; how he got into my pajamas I'll never know." The joke is funny because the main sentence could theoretically mean either that (1) the speaker, while wearing pajamas, shot an elephant or (2) the speaker shot an elephant that was inside his pajamas. [34]
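The two readings can be made explicit with a small context-free grammar. The sketch below follows the well-known toy grammar for this sentence from the NLTK book; it assumes the nltk package is installed, and the grammar is deliberately minimal.

```python
# Parse the Groucho Marx sentence with a toy CFG; the chart parser finds both
# trees, one attaching "in my pajamas" to the verb phrase, one to "an elephant".
import nltk

groucho_grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
""")

sentence = "I shot an elephant in my pajamas".split()
for tree in nltk.ChartParser(groucho_grammar).parse(sentence):
    print(tree)  # two parses, one per reading
```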
Linguists such as Victor Raskin and Salvatore Attardo have proposed that certain linguistic mechanisms (part of our linguistic competence) underlie our ability to understand humor and to determine whether something was meant as a joke. Raskin puts forth a formal semantic theory of humor, now widely known as the semantic script theory of humor (SSTH). The theory is designed to model the native speaker's intuition with regard to humor or, in other words, his humor competence. It models and thus defines the concept of funniness, and is formulated for an ideal speaker-hearer community, i.e., for people whose senses of humor are exactly identical. Raskin's theory consists of two components: the set of all scripts available to speakers, and a set of combinatorial rules. The term "script" in Raskin's theory refers to the lexical meaning of a word; the function of the combinatorial rules is to combine all possible meanings of the scripts. Raskin posits that these two components are what allow us to interpret humor. [35]
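A toy sketch can make the two components concrete. In the fragment below, words evoke scripts, and a combinatorial rule flags a text as a joke candidate when it is compatible with two opposed scripts; the script inventory and the opposition test are invented placeholders, not Raskin's actual formalism.

```python
# Toy SSTH-style check: a text is a humor candidate if it is compatible with
# two scripts that stand in an opposition relation. All entries are invented.

SCRIPTS = {
    "doctor": {"DOCTOR"},                            # words evoke scripts
    "lover": {"LOVER"},
    "visit": {"VISIT_PROFESSIONAL", "VISIT_SOCIAL"}, # an ambiguous trigger
}

OPPOSED = [frozenset({"VISIT_PROFESSIONAL", "VISIT_SOCIAL"})]  # opposition pairs

def humor_candidate(words):
    """A text qualifies if it evokes two scripts in an opposition relation."""
    evoked = set().union(*(SCRIPTS.get(w, set()) for w in words))
    return any(pair <= evoked for pair in OPPOSED)

print(humor_candidate(["doctor", "visit"]))  # True: two opposed readings coexist
```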