|Born||August 29, 1927|
|Died||March 13, 2017 (aged 89)|
Waikoloa Beach, Hawaii
|Alma mater||University of Tokyo|
|Known for||One of the pioneers of speech science|
|Fields||Physics, phonetics, and linguistics|
|Institutions||University of Tokyo|
Osamu Fujimura (藤村靖, August 29, 1927, Tokyo – March 13, 2017, Waikoloa Beach, Hawaii) was a Japanese physicist, phonetician, and linguist, recognized as one of the pioneers of speech science. Fujimura was known for his influential work across the diverse field of speech-related studies, including acoustics, phonetics/phonology, instrumentation techniques, speech production mechanisms, and computational/theoretical linguistics. After receiving his Doctor of Science degree from the University of Tokyo for research he conducted at MIT, Fujimura served as Director and Professor at the Research Institute of Logopedics and Phoniatrics (RILP) at the University of Tokyo from 1965 to 1973. He then continued his research at Bell Labs in Murray Hill, New Jersey, U.S., from 1973 to 1988 as a Department Head, working for Max Mathews. He subsequently moved to The Ohio State University, where he was Professor and Department Head for Speech and Hearing Science; he was named Professor Emeritus in 2003. He was a Fellow of the American Association for the Advancement of Science.
Fujimura's career as a scientist spanned nearly three quarters of a century. He authored, co-authored, or edited 256 scientific publications covering a vast range of topics, including physics, speech acoustics and articulation, phonology, kanji transcription methods, and syntax. These comprised 11 books and monographs, 64 journal articles, 58 articles or chapters in books, 56 proceedings articles, 42 miscellaneous writings, and 25 articles in RILP.
Fujimura's work covers all aspects of phonetics, with a focus on speech articulation, acoustic analysis, and speech perception. Fujimura and his colleagues introduced X-ray technologies to study human articulation patterns. The X-ray microbeam speech corpus is considered an important resource for modern phonetic research. His work contributed to the foundation of modern acoustic analyses of speech sounds, especially the acoustics of nasal consonants, proposing the notion of the "anti-formant": a zero in the vocal-tract transfer function that produces a dip, rather than a peak, in the spectrum. His work also showed that the consonant-to-vowel transition is perceptually more salient than the vowel-to-consonant transition. In addition to his contributions to phonetic science, he wrote a review of "Syntactic Structures" by Noam Chomsky in 1963, thereby contributing to the introduction of generative linguistics in Japan. Later in his career, he proposed a model of speech articulation called the C/D model, in which phonological featural specifications are "Converted" and "Distributed" to several articulators. The C/D model is an explicit theory of how mental, phonological information is mapped onto actual physiological articulatory commands, and it is currently being pursued by a number of phoneticians.
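The anti-formant idea can be illustrated with a toy pole-zero model: a pole pair in the transfer function produces a spectral peak (a formant), while a zero pair carves out a dip (an anti-formant). The sketch below is purely illustrative; the sampling rate, frequencies, and radii are arbitrary choices, not values from Fujimura's analyses.

```python
import cmath
import math

fs = 10000.0                               # sampling rate, Hz (illustrative)
f_formant, f_antiformant = 500.0, 1500.0   # illustrative resonance / anti-resonance
r = 0.95                                   # root radius; controls bandwidth

def root_pair(f):
    """Complex-conjugate root pair on the z-plane at frequency f."""
    w = 2 * math.pi * f / fs
    return (r * cmath.exp(1j * w), r * cmath.exp(-1j * w))

poles = root_pair(f_formant)       # pole pair -> spectral peak (formant)
zeros = root_pair(f_antiformant)   # zero pair -> spectral dip (anti-formant)

def mag_db(f):
    """|H(z)| = prod|z - zeros| / prod|z - poles| on the unit circle, in dB."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    num = abs(z - zeros[0]) * abs(z - zeros[1])
    den = abs(z - poles[0]) * abs(z - poles[1])
    return 20 * math.log10(num / den)

freqs = [5.0 * i for i in range(1000)]     # 0 .. 4995 Hz in 5 Hz steps
response = [mag_db(f) for f in freqs]
peak_f = freqs[response.index(max(response))]
dip_f = freqs[response.index(min(response))]
print(f"formant (peak) near {peak_f:.0f} Hz, anti-formant (dip) near {dip_f:.0f} Hz")
```

Running the sweep locates the peak near the pole frequency and the dip near the zero frequency, which is exactly the signature an anti-formant leaves in a nasal consonant's spectrum.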
His first position was Research Assistant at the Kobayashi Institute of Physical Research, Kokubunji, Tokyo, from 1952 to 1958. He then served as Assistant Professor at the Research Laboratory of Communication Science at the University of Electro-Communications in Chōfu, Tokyo, from 1958 to 1965. From 1958 to 1961 he worked at MIT as a Division of Sponsored Research staff member in the Research Laboratory of Electronics (Speech Communication Group), where he was supervised by Morris Halle and K. N. Stevens. This was followed by two years (1963–1965) as a Guest Researcher at the Royal Institute of Technology in Stockholm, Sweden, supervised by Gunnar Fant. During this time, he conducted research that contributed to the foundation of modern acoustic analyses of speech.
He obtained his D.Sc. in Physics from the University of Tokyo in 1962. Starting in 1965, he served as a professor at the Research Institute of Logopedics and Phoniatrics in the Faculty of Medicine at the University of Tokyo, and he served as the director of the Institute between 1969 and 1973, during which time he published many important phonetic research papers. In 1973 he was concurrently Adjunct Professor in the Department of Linguistics, Faculty of Letters, and Chair of the Graduate Course in Physiology (Division of Medicine), both at the University of Tokyo. It was during this period that RILP became an active research center for speech science, focusing on the development of highly advanced techniques and tools for studying the articulation of speech, including fiberoptics, electromyography (EMG), and the X-ray microbeam. Some studies conducted at RILP during this time are considered foundational to modern phonetic science and are still cited in current phonetics papers.
In 1973, he moved to AT&T Bell Labs in Murray Hill, NJ, USA. At Bell Labs he served as head of the Department of Linguistics and Speech Analysis Research until 1984, head of the Department of Linguistics and Artificial Intelligence Research until 1987, and head of the Department of Artificial Intelligence Research until 1988. During this time Fujimura worked with a number of scientists and is remembered for encouraging young researchers, including Mark Liberman, Janet Pierrehumbert, William Poser, Mary Beckman, Marian Macchi, Sue Hertz, Jan Edwards, and Julia Hirschberg. Fujimura's broad vision across the entire field of linguistics is evident in his impact on postdoctoral researchers at Bell Labs such as John McCarthy, a formal phonologist, and Barbara Partee, a formal semanticist.
In 1988, Fujimura moved to the Department of Speech & Hearing Science at The Ohio State University, where he worked until retiring as Professor Emeritus in 2003. During his time at OSU he was also a Member of the Center for Cognitive Science (1988 to 2003) and a Participating Professor at the Biomedical Engineering Center (1992 to 2003). In addition, he was a periodic Guest Researcher at ATR/HIP in Japan from 1992 to 1996. From 1997 to 1998 he took sabbatical leave from OSU as a Japan Society for the Promotion of Science Invitation Fellow at the Research Institute of Asian and African Languages and Cultures at the Tokyo University of Foreign Studies.
Fujimura served as a fellow of the International Institute for Advanced Studies in Kyoto, Japan, from April 2004 to August 2006. It was during this time that he further developed the C/D model of speech articulation while mentoring researchers such as Reiner Wilhelms-Tricarico, Chao-Min Wu, Donna Erickson, Kerrie Beechler Obert, Caroline Menezes, and Bryan Pardo.
After retiring from OSU, he was a researcher at the Center of Excellence (COE) at Nagoya University from 2003 to 2004, working with professors K. Kakehi and F. Itakura.
Fujimura believed strongly in diversity and inclusion in science. Through mentorship and encouragement, he aided a younger generation of speech scientists, and he urged them to "pay it forward" with their own junior researchers, creating a perpetual positive cycle.
As a basic researcher doing pioneering work on speech synthesis, Fujimura did not frequently patent his inventions.
One exception was his 1978 speech transmission system, US Patent 4,170,719 A, a speech synthesis system in which voiced and unvoiced sounds are produced by different mechanisms.
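The voiced/unvoiced distinction can be sketched with a minimal source-filter model: voiced sounds are driven by a periodic pulse train, unvoiced sounds by noise, and both pass through the same resonator. This is a generic textbook sketch, not the mechanism of the patent; all parameter values are illustrative.

```python
import random

fs = 8000                 # sampling rate, Hz (illustrative)
n = 800                   # a 100 ms segment
f0 = 100.0                # pitch of the voiced source, Hz

def excitation(voiced):
    """Voiced source: periodic impulse train at f0. Unvoiced source: white noise."""
    if voiced:
        period = int(fs / f0)
        return [1.0 if i % period == 0 else 0.0 for i in range(n)]
    rng = random.Random(0)                       # fixed seed for reproducibility
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def resonator(x, a=0.7):
    """One-pole filter as a crude stand-in for vocal-tract resonance."""
    y, state = [], 0.0
    for s in x:
        state = s + a * state
        y.append(state)
    return y

voiced = resonator(excitation(True))
unvoiced = resonator(excitation(False))

def periodicity(x, lag):
    """Normalized autocorrelation at one pitch period: near 1 for periodic signals."""
    num = sum(x[i] * x[i + lag] for i in range(len(x) - lag))
    den = sum(s * s for s in x)
    return num / den

lag = int(fs / f0)        # one pitch period
print(f"voiced periodicity:   {periodicity(voiced, lag):.2f}")
print(f"unvoiced periodicity: {periodicity(unvoiced, lag):.2f}")
```

The voiced output repeats every pitch period and so correlates strongly with itself at that lag, while the noise-driven output does not, which is the acoustic cue the two production mechanisms exploit.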
One of his creations was the computer-tracking-based X-ray microbeam system for recording human utterances. The first version of the machine was built at the University of Tokyo by JEOL (Nihon-Denshi KK); the second was built at the University of Wisconsin and remained in use until 2009. The machines used extremely low doses of X-rays to track the movement of the tongue and other structures of the oral cavity in order to study how humans utter sounds. Both were used by generations of researchers to discover and verify theories of human speech production.
US Patent 4,426,722 A covers the electron source for the X-ray machine.
Osamu Fujimura was born August 29, 1927. The Fujimura family is descended from the Minamoto clan (源氏), remotely related to the samurai Minamoto no Yoritomo (源頼朝), who founded the Kamakura Bakufu (military government) as Shōgun (将軍) in the 12th century. Yoritomo's grave is behind the Hachimangū (八幡宮) in Yukinoshita, Kamakura. Fujimura was survived by his second wife, J.C. Williams, and four sons: Akira, Makoto, Wataru, and Itaru.
Approximants are speech sounds that involve the articulators approaching each other, but not narrowly enough or with enough articulatory precision to create turbulent airflow. Therefore, approximants fall between fricatives, which do produce a turbulent airstream, and vowels, which produce no turbulence. This class is composed of sounds like [ɹ], semivowels like [j] and [w], and lateral approximants like [l].
Phonetics is a branch of linguistics that studies how humans make and perceive sounds, or in the case of sign languages, the equivalent aspects of sign. Phoneticians—linguists who specialize in phonetics—study the physical properties of speech. The field of phonetics is traditionally divided into three sub-disciplines based on the research questions involved: how humans plan and execute movements to produce speech, how different movements affect the properties of the resulting sound, and how humans convert sound waves to linguistic information. Traditionally, the minimal linguistic unit of phonetics is the phone—a speech sound in a language—which differs from the phonological unit of the phoneme; the phoneme is an abstract categorization of phones.
Phonology is a branch of linguistics that studies how languages or dialects systematically organize their sounds. The term also refers to the sound system of any particular language variety. At one time, the study of phonology related only to the study of the systems of phonemes in spoken languages. Now it may relate to any linguistic analysis either at a level beneath the word or at all levels of language where sound or signs are structured to convey linguistic meaning.
Linguistics is the scientific study of human language. Someone who engages in this study is called a linguist.
The voiced alveolar approximant is a type of consonantal sound used in some spoken languages. The symbol in the International Phonetic Alphabet that represents the alveolar and postalveolar approximants is ⟨ɹ⟩, a lowercase letter r rotated 180 degrees. The equivalent X-SAMPA symbol is r\.
Acoustic phonetics is a subfield of phonetics that deals with the acoustic aspects of speech sounds. Acoustic phonetics investigates time-domain features such as the mean squared amplitude of a waveform, its duration, and its fundamental frequency; frequency-domain features such as the frequency spectrum; and even combined spectrotemporal features. It also studies the relationship of these properties to other branches of phonetics and to abstract linguistic concepts such as phonemes, phrases, or utterances.
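Several of the features named above can be computed directly from a waveform. The sketch below uses a synthetic 200 Hz sine wave as a stand-in for a voiced speech signal; the signal and all parameter values are illustrative, and the F0 estimator is a bare-bones autocorrelation method.

```python
import math

fs = 8000                      # sampling rate, Hz
f0_true = 200.0                # illustrative "pitch" of the synthetic signal
n = 800                        # 100 ms of signal
signal = [math.sin(2 * math.pi * f0_true * i / fs) for i in range(n)]

# Time-domain features: duration and root-mean-square amplitude.
duration_ms = 1000.0 * n / fs
rms = math.sqrt(sum(s * s for s in signal) / n)

# Fundamental frequency via the autocorrelation peak in a 50-500 Hz search range.
def autocorr(x, lag):
    return sum(x[i] * x[i + lag] for i in range(len(x) - lag))

best_lag = max(range(fs // 500, fs // 50), key=lambda L: autocorr(signal, L))
f0_est = fs / best_lag

print(f"duration: {duration_ms:.0f} ms, RMS: {rms:.3f}, F0 estimate: {f0_est:.0f} Hz")
```

A frequency-domain feature such as the spectrum would be obtained the same way, by taking an FFT of the windowed signal instead of its autocorrelation.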
In linguistics, fortis and lenis, sometimes identified with tense and lax, are pronunciations of consonants with relatively greater and lesser energy. English has fortis consonants, such as the p in pat, with a corresponding lenis consonant, such as the b in bat. Fortis and lenis consonants may be distinguished by tenseness or other characteristics, such as voicing, aspiration, glottalization, velarization, length, and length of nearby vowels. Fortis and lenis were coined for languages where the contrast between sounds such as p and b does not involve voicing.
Peter Nielsen Ladefoged was a British linguist and phonetician.
In linguistics, a segment is "any discrete unit that can be identified, either physically or auditorily, in the stream of speech". The term is most used in phonetics and phonology to refer to the smallest elements in a language, and this usage can be synonymous with the term phone.
Speech is human vocal communication using language. Each language uses phonetic combinations of vowel and consonant sounds that form the sound of its words, and uses those words in their semantic character as words in the lexicon of a language according to the syntactic constraints that govern lexical words' function in a sentence. In speaking, speakers perform many different intentional speech acts, e.g., informing, declaring, asking, persuading, and directing, and can use enunciation, intonation, degrees of loudness, tempo, and other non-representational or paralinguistic aspects of vocalization to convey meaning. In their speech, speakers also unintentionally communicate many aspects of their social position, such as sex, age, place of origin, physical states, psychic states, physico-psychic states, education or experience, and the like.
Patricia Ann Keating is an American linguist and noted phonetician. She received her PhD in Linguistics from Brown University in 1980. Since 1980 she has been on the faculty of the Linguistics Department at the University of California, Los Angeles. She became a Full Professor and director of the UCLA Phonetics Laboratory in 1991.
Kenneth Noble Stevens was the Clarence J. LeBel Professor of Electrical Engineering and Computer Science, and Professor of Health Sciences and Technology at the Research Laboratory of Electronics at MIT. Stevens was head of the Speech Communication Group in MIT's Research Laboratory of Electronics (RLE), and was one of the world's leading scientists in acoustic phonetics.
Clinical linguistics is a sub-discipline of applied linguistics involved in the description, analysis, and treatment of language disabilities, especially the application of linguistic theory to the field of Speech-Language Pathology. The study of the linguistic aspect of communication disorders is of relevance to a broader understanding of language and linguistic theory.
In some schools of phonetics, sounds are distinguished as grave or acute. This is primarily a perceptual classification, based on whether the sounds are perceived as sharp, high intensity, or as dull, low intensity. However, it can also be defined acoustically or in terms of the articulations involved.
Jennifer Sandra Cole is a professor of linguistics at Northwestern University. Her research uses experimental and computational methods to study the sound structure of language. She is General Editor of Laboratory Phonology and a founding member of the Association for Laboratory Phonology.
Julie Beth Lovins was a computational linguist who first published a stemming algorithm for word matching in 1968.
The Lovins stemmer is a single-pass, context-sensitive stemmer which removes endings based on the longest-match principle. It was the first stemmer to be published and, considering its date of release, was extremely well developed; it has been the main influence on a large amount of subsequent work in the area (Adam G. et al.).
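The longest-match principle can be sketched as follows. This toy stemmer uses a handful of invented suffixes; the real Lovins stemmer has roughly 290 endings plus context conditions and recoding rules, all of which are omitted here.

```python
# Invented toy suffix list, sorted longest-first so the longest match wins.
SUFFIXES = sorted(["ational", "ation", "ness", "ing", "ion", "ed", "s"],
                  key=len, reverse=True)
MIN_STEM = 2    # require at least two letters of stem to remain

def stem(word):
    """Single pass: strip the longest matching suffix, if any."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= MIN_STEM:
            return word[: -len(suffix)]
    return word

# Note "running" -> "runn": the real stemmer's recoding phase would
# repair such doubled consonants after suffix removal.
for w in ["relational", "nations", "running", "cat"]:
    print(w, "->", stem(w))
```

Longest-match ordering is what makes "relational" lose "ational" in one step rather than being whittled down suffix by suffix.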
Speech acquisition focuses on the development of spoken language by a child. Speech consists of an organized set of sounds or phonemes that are used to convey meaning while language is an arbitrary association of symbols used according to prescribed rules to convey meaning. While grammatical and syntactic learning can be seen as a part of language acquisition, speech acquisition focuses on the development of speech perception and speech production over the first years of a child's lifetime. There are several models to explain the norms of speech sound or phoneme acquisition in children.
Janet Fletcher is an Australian linguist. She completed her BA at the University of Queensland in 1981 and then moved to the United Kingdom and received her PhD from the University of Reading in 1989.
Electromagnetic articulography (EMA) is a method of measuring the position of parts of the mouth. EMA uses sensor coils placed on the tongue and other parts of the mouth to measure their position and movement over time during speech and swallowing. Induction coils around the head produce an electromagnetic field that creates, or induces, a current in the sensors in the mouth. Because the current induced is inversely proportional to the cube of the distance, a computer is able to analyse the current produced and determine the sensor coil's location in space.
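Given the inverse-cube relation described above, recovering a sensor's distance from the measured current is a cube-root inversion. A minimal sketch, using a hypothetical calibration constant rather than any real device value:

```python
# The article states that induced current falls as the cube of distance:
# I = k / d**3, so distance recovers as d = (k / I) ** (1/3).
K = 8.0e-6    # hypothetical calibration constant, not a real device value

def distance_from_current(current, k=K):
    """Invert the inverse-cube law to estimate sensor-to-coil distance."""
    return (k / current) ** (1.0 / 3.0)

# Round-trip check with a sensor placed at 2.0 cm from the transmitter coil.
d_true = 2.0
measured = K / d_true ** 3
print(f"estimated distance: {distance_from_current(measured):.3f} cm")
```

In a real EMA system several transmitter coils are used, so the per-coil distance estimates can be combined to triangulate the sensor's position in space.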
Phonetic space is the range of sounds that can be made by an individual. There is some controversy over whether an individual's phonetic space is language dependent, or if there exists some common, innate, phonetic space across languages.