Emily M. Bender

  • Books

    • Bender, Emily M. (2000). Syntactic Variation and Linguistic Competence: The Case of AAVE Copula Absence. Stanford University. ISBN 978-0493085425.
    • Sag, Ivan; Wasow, Thomas; Bender, Emily M. (2003). Syntactic theory: A formal introduction. Center for the Study of Language and Information. ISBN 978-1575864006.
    • Bender, Emily M. (2013). Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax. Synthesis Lectures on Human Language Technologies. Springer. ISBN 978-3031010224.
    • Bender, Emily M.; Lascarides, Alex (2019). Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics. Synthesis Lectures on Human Language Technologies. Springer. ISBN 978-3031010446.

  • Articles

    • Bender, Emily M. (2000). "The Syntax of Mandarin Bǎ: Reconsidering the Verbal Analysis". Journal of East Asian Linguistics. 9 (2): 105–145. doi:10.1023/A:1008348224800. S2CID 115999663 – via Academia.edu.
    • Bender, Emily M.; Flickinger, Dan; Oepen, Stephan (2002). The Grammar Matrix: An open-source starter-kit for the rapid development of cross-linguistically consistent broad-coverage precision grammars. Proceedings of the 2002 workshop on Grammar engineering and evaluation. Vol. 15.
    • Siegel, Melanie; Bender, Emily M. (2002). Efficient deep processing of Japanese. Proceedings of the 3rd workshop on Asian language resources and international standardization. Vol. 12.
    • Goodman, M. W.; Crowgey, J.; Xia, F.; Bender, E. M. (2015). "Xigt: Extensible interlinear glossed text for natural language processing". Language Resources and Evaluation. 49 (2): 455–485. doi:10.1007/s10579-014-9276-1. S2CID 254372685.
    • Xia, Fei; Lewis, William D.; Goodman, Michael Wayne; Slayden, Glenn; Georgi, Ryan; Crowgey, Joshua; Bender, Emily M. (2016). "Enriching a massively multilingual database of interlinear glossed text". Language Resources and Evaluation. 50 (2): 321–349. doi:10.1007/s10579-015-9325-4. S2CID 254379828.
    • Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. doi:10.1145/3442188.3445922.

    Related Research Articles

    Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others.

    Natural language processing (NLP) is an interdisciplinary subfield of computer science and linguistics. It is primarily concerned with giving computers the ability to process and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. To this end, natural language processing often borrows ideas from theoretical linguistics. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.

    In linguistics, syntax is the study of how words and morphemes combine to form larger units such as phrases and sentences. Central concerns of syntax include word order, grammatical relations, hierarchical sentence structure (constituency), agreement, the nature of crosslinguistic variation, and the relationship between form and meaning (semantics). There are numerous approaches to syntax that differ in their central assumptions and goals.

    Head-driven phrase structure grammar (HPSG) is a highly lexicalized, constraint-based grammar developed by Carl Pollard and Ivan Sag. It is a type of phrase structure grammar, as opposed to a dependency grammar, and it is the immediate successor to generalized phrase structure grammar. HPSG draws from other fields such as computer science and uses Ferdinand de Saussure's notion of the sign. It uses a uniform formalism and is organized in a modular way which makes it attractive for natural language processing.

    Parsing, syntax analysis, or syntactic analysis is the process of analyzing a string of symbols, either in natural language, computer languages or data structures, conforming to the rules of a formal grammar. The term parsing comes from Latin pars (orationis), meaning part.

    Ivan Andrew Sag was an American linguist and cognitive scientist. He did research in areas of syntax and semantics as well as work in computational linguistics.

    John Robert "Haj" Ross is an American poet and linguist. He played a part in the development of generative semantics along with George Lakoff, James D. McCawley, and Paul Postal. He was a professor of linguistics at MIT from 1966 to 1985 and has worked in Brazil, Singapore and British Columbia, and until spring 2021, he taught at the University of North Texas.

    Joan Wanda Bresnan FBA is Sadie Dernham Patek Professor in Humanities Emerita at Stanford University. She is best known as one of the architects of the theoretical framework of lexical functional grammar.

    Eva Hajičová [ˈɛva ˈɦajɪt͡ʃovaː] is a Czech linguist, specializing in topic–focus articulation and corpus linguistics. In 2006, she was awarded the Association for Computational Linguistics (ACL) Lifetime Achievement Award. She was named a fellow of the ACL in 2011.

    Google Brain was a deep learning artificial intelligence research team under the umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, it combined open-ended machine learning research with information systems and large-scale computing resources. It created tools such as TensorFlow, which made neural networks available to the public, and pursued multiple internal AI research projects, aiming to create research opportunities in machine learning and natural language processing. It was merged into former Google sister company DeepMind to form Google DeepMind in April 2023.

    Georgia M. Green is an American linguist and academic. She is an emeritus professor at the University of Illinois at Urbana-Champaign. Her research has focused on pragmatics, speaker intention, word order and meaning. She has been an advisory editor for several linguistics journals or publishers and she serves on the usage committee for the American Heritage Dictionary.

    Ellen M. Kaisse is an American linguist. She is professor emerita of linguistics at the University of Washington, best known for her research on the interface between phonology, syntax, and morphology.

    Raffaella Zanuttini is an Italian linguist whose research focuses primarily on syntax and linguistic variation. She is a Professor of Linguistics at Yale University in New Haven, Connecticut.

    Mirella Lapata FRSE is a computer scientist and Professor in the School of Informatics at the University of Edinburgh. Working on the general problem of extracting semantic information from large bodies of text, Lapata develops computer algorithms and models in the field of natural language processing (NLP).

    Timnit Gebru is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias and data mining. She is an advocate for diversity in technology and co-founder of Black in AI, a community of Black researchers working in AI. She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).

    Usage-based linguistics is an approach within a broader functional/cognitive framework that emerged in the late 1980s and assumes a profound relation between linguistic structure and usage. It challenges the dominant focus of 20th-century linguistics on language as an isolated system removed from its use in human interaction and human cognition. Usage-based models instead posit that linguistic information is expressed via context-sensitive mental processing and mental representations that can account for the complexity of actual language use at all levels. Broadly speaking, a usage-based model of language accounts for language acquisition and processing, synchronic and diachronic patterns, and both low-level and high-level structure in language by looking at actual language use.

    Margaret Mitchell is a computer scientist who works on algorithmic bias and fairness in machine learning. She is best known for her work on automatically removing undesired biases concerning demographic groups from machine learning models, as well as for more transparent reporting of their intended use.

    Mona Talat Diab is a computer science professor and director of Carnegie Mellon University's Language Technologies Institute. Previously, she was a professor at George Washington University and a research scientist with Facebook AI. Her research focuses on natural language processing, computational linguistics, cross lingual/multilingual processing, computational socio-pragmatics, Arabic language processing, and applied machine learning.

    Vera Demberg is a German computational linguist and professor of computer science and computational linguistics at Saarland University.

    In machine learning, the term stochastic parrot is a metaphor to describe the theory that large language models, though able to generate plausible language, do not understand the meaning of the language they process. The term was coined by Emily M. Bender in the 2021 artificial intelligence research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.

    References

    1. Weil, Elizabeth (2023-03-01). "You Are Not a Parrot". Intelligencer. Retrieved 2023-09-11.
    2. "In Conversation with Emily Menon Bender - Sheila Bender's Writing It Real". 2023-09-07. Retrieved 2023-09-11.
    3. Baković, Eric (2006-10-04). "Language Log: Speaking of missing words in American history". Language Log. Archived from the original on 2023-05-27. Retrieved 2024-02-12.
    4. "Emily M. Bender". OpenReview. Retrieved 2023-09-11.
    5. "Emily M. Bender | Department of Linguistics | University of Washington". linguistics.washington.edu. Retrieved 2021-11-09.
    6. "Emily M. Bender". University of Washington faculty website. Retrieved February 4, 2023.
    7. Bender, Emily M. (2022-06-14). "Human-like programs abuse our empathy – even Google engineers aren't immune". The Guardian. ISSN 0261-3077. Retrieved 2023-02-04.
    8. Bender, Emily. "Emily Bender CV" (PDF).
    9. "Emily M. Bender". University of Washington. 2021-11-10. Retrieved 2021-11-10.
    10. "UW Computational Linguistics Master's Degree – Online & Seattle". www.compling.uw.edu. Retrieved 2017-07-19.
    11. "UW Computational Linguistics Lab".
    12. Parvi, Joyce (2019-08-21). "Emily M. Bender is awarded Howard and Frances Nostrand Endowed Professorship for 2019–2021". linguistics.washington.edu. Retrieved 2019-12-08.
    13. "Emily M Bender". The Alan Turing Institute. Retrieved 2021-10-31.
    14. "ACL 2021 Election Results: Congratulations to Emily M. Bender and Mohit Bansal". 2021-11-09. Retrieved 2021-11-10.
    15. "About the ACL". 2024. Retrieved 2024-02-23.
    16. "ACL Officers". 2024-02-05. Retrieved 2024-02-23.
    17. "2022 AAAS Fellows". American Association for the Advancement of Science. Retrieved 2023-08-03.
    18. "Emily M. Bender: Publications". University of Washington faculty website. Retrieved 2021-11-18.
    19. "LinGO Grammar Matrix | Department of Linguistics | University of Washington". linguistics.washington.edu. Retrieved 2017-07-19.
    20. "An open source grammar development environment and broad-coverage English grammar using HPSG" (PDF). LREC. 2000. Archived from the original (PDF) on 2017-08-09. Retrieved 2017-07-19.
    21. Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021-03-03). "On the Dangers of Stochastic Parrots: Can Language Models be Too Big? 🦜". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT '21. New York, NY, USA: Association for Computing Machinery. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.
    22. Simonite, Tom. "What Really Happened When Google Ousted Timnit Gebru". Wired. ISSN 1059-1028. Retrieved 2024-04-02.
    23. Hao, Karen (December 4, 2020). "We read the paper that forced Timnit Gebru out of Google. Here's what it says". MIT Technology Review.
    24. "Inside a Hot-Button Research Paper: Dr. Emily M. Bender Talks Large Language Models and the Future of AI Ethics". Emerging Tech Brew. Retrieved 2022-09-26.
    25. Bender, Emily M. (2022-05-02). "On NYT Magazine on AI: Resist the Urge to be Impressed". Medium. Retrieved 2022-09-26.
    26. Bender, Emily M.; Koller, Alexander (2020-07-05). "Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data". Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics: 5185–5198. doi:10.18653/v1/2020.acl-main.463. S2CID 211029226.
    Emily M. Bender
    Born: 1973 (age 50–51)
    Known for: Research on the risks of large language models and the ethics of NLP; coining the term "stochastic parrot"; research on the use of head-driven phrase structure grammar in computational linguistics
    Spouse: Vijay Menon [1]
    Parent: Sheila Bender [2]
    Academic background
    Alma mater: UC Berkeley and Stanford University [3][4]
    Thesis: Syntactic variation and linguistic competence: The case of AAVE copula absence (2000) [3][4]
    Doctoral advisors: Tom Wasow and Penelope Eckert [4]