Joan Bresnan | |
---|---|
Born | August 22, 1945 |
Known for | Lexical functional grammar |
Joan Wanda Bresnan FBA (born August 22, 1945) is Sadie Dernham Patek Professor in Humanities Emerita at Stanford University. [1] She is best known as one of the architects (with Ronald Kaplan) of the theoretical framework of lexical functional grammar. [2]
After graduating from Reed College in 1966 with a degree in philosophy, [1] Bresnan earned her doctorate in linguistics in 1972 at the Massachusetts Institute of Technology, [3] where she studied with Noam Chomsky. [4] In the early and mid-1970s, her work focused on complementation and wh-movement constructions within transformational grammar, and she frequently took positions at odds with those espoused by Chomsky. [5] [6]
Her dissatisfaction with transformational grammar led her to collaborate with Kaplan on a new theoretical framework, lexical-functional grammar, or LFG. [7] A volume of papers written in the new framework and edited by Bresnan, entitled The Mental Representation of Grammatical Relations, appeared in 1982. [8] Since then, Bresnan's work has focused on LFG analyses of various phenomena, primarily in English, Bantu languages, and Australian languages. She has also worked on analyses in optimality theory, and has pursued statistical approaches to linguistics. She has a strong interest in linguistic typology, which has influenced the development of LFG. [9] Additional research interests of hers include dynamics of probabilistic grammar and empirical foundations of syntax. [10] In pursuit of the latter, she established Stanford's Spoken Syntax Lab. [11]
Joan Bresnan was a Guggenheim Fellow in 1975–76 and a Fellow of the Center for Advanced Study in the Behavioral Sciences at Stanford in 1982–83.
She served as the president of the Linguistic Society of America in 1999. [12] She was named a Fellow of the Linguistic Society of America in 2006. [13]
She was elected a Fellow of the American Academy of Arts and Sciences in 2004. [14]
She was honored in August 2005 with a festschrift entitled Architectures, Rules, and Preferences: A Festschrift for Joan Bresnan, published by CSLI Publications in December 2007. [15]
Between 2009 and 2012, she made repeated research visits to Freiburg as an External Fellow at the Freiburg Institute for Advanced Studies. She was elected a Fellow of the Cognitive Science Society in 2012. [16] She was elected a Corresponding Fellow of the British Academy in 2015. [17]
In 2016, she received the Association for Computational Linguistics (ACL) Lifetime Achievement Award. [18]
In 2023, she was elected to the National Academy of Sciences. [19]
Bresnan has also taught at the University of Massachusetts Amherst and the Massachusetts Institute of Technology as a member of the faculty. [1]
Bresnan wrote an informal and somewhat humorous account of her career and works for her ACL Lifetime Achievement Award. [20]
As of December 16, 2018, Stanford lists forty-four books and papers that Bresnan has authored or co-authored since 1996, [21] though her publications date back well over a decade earlier. An incomplete selection of her particularly influential works appears below. [11]