Acceptability judgment task

An acceptability judgment task, also called an acceptability rating task, is a common method in empirical linguistics for gathering information about the internal grammar of speakers of a language.

Acceptability and grammaticality

The goal of acceptability rating studies is to gain insight into the mental grammars of participants. Because the grammaticality of a linguistic construction is an abstract construct that cannot be accessed directly, this type of task is usually called an acceptability judgment rather than a grammaticality judgment. The situation is comparable to intelligence: intelligence is an abstract construct that cannot be measured directly; what can be measured are the outcomes of specific test items. The result of a single item, however, is not very informative on its own, so IQ tests combine several items into a score. Similarly, in acceptability rating studies, a grammatical construction is measured through several items, i.e., several sentences to be rated. This also ensures that participants rate the construction itself rather than the meaning of one particular sentence.
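
A minimal sketch of this item-based scoring, with hypothetical sentences, ratings, and variable names invented for illustration (not a standard analysis pipeline):

```python
from statistics import mean

# Hypothetical 7-point Likert ratings (1 = fully unacceptable,
# 7 = fully acceptable) from four participants, for three items
# instantiating the same construction (here: the English passive).
ratings_by_item = {
    "The cake was eaten by the children.": [6, 7, 6, 7],
    "The song was sung by the choir.": [7, 6, 7, 6],
    "The door was opened by the wind.": [5, 6, 6, 5],
}

# Average within each item first, then across items, so the score
# reflects the construction rather than any single sentence's meaning.
item_means = [mean(scores) for scores in ratings_by_item.values()]
construction_score = mean(item_means)

print(f"Per-item means: {[round(m, 2) for m in item_means]}")
print(f"Construction score: {construction_score:.2f}")
```

In practice, raw ratings are often also normalized per participant (for example, z-scored) before items are averaged, to correct for individual differences in how participants use the scale.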

In theoretical linguistics, a speaker's judgment of the well-formedness of a linguistic string, called a grammaticality judgment, is based on whether the sentence is interpreted in accordance with the rules and constraints of the relevant grammar. If the rules and constraints of the particular lect are followed, the sentence is judged to be grammatical; an ungrammatical sentence, in contrast, is one that violates the rules of the given language variety.

The difference between acceptability and grammaticality is linked to the distinction between performance and competence in generative grammar.

Types

Several different types of acceptability rating tasks are used in linguistics. The most common tasks use Likert scales; forced-choice and yes-no rating tasks are also common. Besides these classical test types, there are other methods, such as thermometer judgments or magnitude estimation, which have, however, been argued to be more difficult for participants to process.[1]
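
As a rough illustration of how these task types differ from the participant's point of view, here is a minimal console sketch (the function names and prompts are invented for this example; real studies use dedicated experiment software):

```python
def likert_judgment(sentence: str, scale_max: int = 7) -> int:
    """Collect a Likert-scale rating from 1 (unacceptable) to scale_max."""
    while True:
        answer = input(f"How acceptable is: '{sentence}'? (1-{scale_max}): ")
        if answer.isdigit() and 1 <= int(answer) <= scale_max:
            return int(answer)

def yes_no_judgment(sentence: str) -> bool:
    """Collect a binary acceptable / not acceptable judgment."""
    answer = input(f"Is this sentence acceptable: '{sentence}'? (y/n): ")
    return answer.strip().lower().startswith("y")

def forced_choice(sentence_a: str, sentence_b: str) -> str:
    """Ask which of two sentences is the more acceptable one."""
    prompt = (f"Which sentence is more acceptable?\n"
              f"  a) {sentence_a}\n  b) {sentence_b}\n(a/b): ")
    return sentence_a if input(prompt).strip().lower() == "a" else sentence_b
```

A Likert task yields graded data for each sentence, whereas a forced choice only reveals a relative preference between two sentences; differences of this kind are one reason the task types vary in how informative their measures are.[1]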

References

  1. Weskott, T. & Fanselow, G. (2011): On the informativity of different measures of linguistic acceptability. Language, 87(2), 249–273.