Automatic pronunciation assessment is the use of speech recognition to verify the correctness of pronounced speech, [1] [2] as distinguished from manual assessment by an instructor or proctor. [3] Also called speech verification, pronunciation evaluation, and pronunciation scoring, the main application of this technology is computer-aided pronunciation teaching (CAPT) when combined with computer-aided instruction for computer-assisted language learning (CALL), speech remediation, or accent reduction.
Pronunciation assessment does not determine unknown speech (as in dictation or automatic transcription); instead, knowing the expected word(s) in advance, it attempts to verify the correctness of the learner's pronunciation and, ideally, their intelligibility to listeners, [4] [5] sometimes along with often-inconsequential prosodic features such as intonation, pitch, tempo, rhythm, and syllable and word stress. [6] Pronunciation assessment is also used in reading tutoring, for example in products such as Microsoft Teams [7] and tools from Amira Learning. [8] Automatic pronunciation assessment can also be used to help diagnose and treat speech disorders such as apraxia. [9]
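One widely used family of verification techniques scores each expected phone against an acoustic model's output, as in goodness-of-pronunciation (GOP) measures. Below is a minimal illustrative sketch, not the method of any cited product; the posterior matrix and phone indexing are assumed inputs from a separate acoustic model already aligned to the expected phone sequence.

```python
# Hedged sketch of a GOP-style score. `posteriors` is an assumed input:
# a (frames, phones) matrix of per-frame phone posteriors from an
# acoustic model, restricted to the frames aligned to one expected phone.
import numpy as np

def gop_score(posteriors: np.ndarray, expected_phone: int) -> float:
    """Mean log-ratio of the expected phone's posterior to the best
    competing posterior; values near 0 suggest a well-produced phone."""
    expected = np.log(posteriors[:, expected_phone] + 1e-10)
    best = np.log(posteriors.max(axis=1) + 1e-10)
    return float((expected - best).mean())
```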
The earliest work on pronunciation assessment avoided measuring genuine listener intelligibility, [10] a shortcoming addressed in 2011 at the Toyohashi University of Technology. [11] Intelligibility assessment has since been included in the Versant high-stakes English fluency assessment from Pearson [12] and in mobile apps from 17zuoye Education & Technology, [13] but it was still missing in 2023 from products by Google Search, [14] Microsoft, [15] Educational Testing Service, [16] Speechace, [17] and ELSA. [18] Assessing authentic listener intelligibility is essential for avoiding inaccuracies arising from accent bias, especially in high-stakes assessments; [19] [20] [21] from words with multiple correct pronunciations; [22] and from phoneme coding errors in machine-readable pronunciation dictionaries. [23] In the Common European Framework of Reference for Languages (CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all levels. [24]
In 2022, researchers found that some newer speech-to-text systems, based on end-to-end reinforcement learning to map audio signals directly into words, produce word and phrase confidence scores closely correlated with genuine listener intelligibility. [25] In 2023, others were able to assess intelligibility using a dynamic time warping-based distance from Wav2Vec2 representations of good speech. [26]
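The distance-based idea can be sketched as follows: embed a learner recording and a known-good reference with a pretrained Wav2Vec2 model, then compute a dynamic time warping distance between the two frame sequences. This is an illustrative approximation, not the cited study's exact method; the file paths, layer choice, and length normalization are assumptions.

```python
# Sketch: DTW distance between Wav2Vec2 frame embeddings of a learner
# utterance and a reference utterance. Paths are placeholders.
import numpy as np
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
model = bundle.get_model().eval()

def embed(path: str) -> np.ndarray:
    wav, sr = torchaudio.load(path)
    wav = torchaudio.functional.resample(wav, sr, bundle.sample_rate)
    with torch.no_grad():
        feats, _ = model.extract_features(wav)
    return feats[-1][0].numpy()  # (frames, dims) from the last layer

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Plain O(len(a) * len(b)) DTW over per-frame Euclidean distances.
    cost = np.full((len(a) + 1, len(b) + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(cost[-1, -1] / (len(a) + len(b)))  # length-normalized

score = dtw_distance(embed("learner.wav"), embed("reference.wav"))
print("distance from reference speech:", score)
```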
Although there are as yet no industry-standard benchmarks for evaluating pronunciation assessment accuracy, researchers occasionally release evaluation speech corpora for others to use in improving assessment quality. [27] [28] Such evaluation databases often emphasize formally unaccented pronunciation to the exclusion of the genuine intelligibility evident from blinded listener transcriptions. [5]
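When such a corpus is available, assessment quality is usually reported as the correlation between machine scores and human judgments. A minimal sketch with made-up placeholder numbers:

```python
# Correlating system scores with listener ratings (values are
# illustrative placeholders, not data from any cited corpus).
from scipy.stats import pearsonr

machine_scores = [0.91, 0.34, 0.78, 0.55, 0.62]  # system outputs
human_ratings = [4.5, 2.0, 4.0, 3.0, 3.5]        # listener judgments
r, p = pearsonr(machine_scores, human_ratings)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```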
Promising areas for improvement under development in 2024 include articulatory feature extraction [29] [30] [31] and transfer learning to suppress unnecessary corrections. [32] Other advances under development include "augmented reality" interfaces for mobile devices that use optical character recognition to provide pronunciation training on text found in the user's environment. [33] [34] As of mid-2024, audio multimodal large language models have also been used to assess pronunciation. [35]
Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text (STT). It incorporates knowledge and research in the computer science, linguistics and computer engineering fields. The reverse process is speech synthesis.
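As a concrete illustration, the third-party Python package SpeechRecognition wraps several recognition engines behind one interface; a minimal transcription sketch (the audio file name is a placeholder):

```python
# Minimal speech-to-text sketch (pip install SpeechRecognition).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("utterance.wav") as source:
    audio = recognizer.record(source)      # read the whole file
print(recognizer.recognize_google(audio))  # send to a free web API
```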
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. The reverse process is speech recognition.
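A minimal text-to-speech sketch using the third-party pyttsx3 package, which drives the operating system's built-in synthesizer:

```python
# Offline TTS via the platform speech engine (pip install pyttsx3).
import pyttsx3

engine = pyttsx3.init()
engine.say("Speech synthesis converts text into audible speech.")
engine.runAndWait()
```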
Word-sense disambiguation is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious.
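For illustration, NLTK ships simplified Lesk, a classic dictionary-overlap algorithm for word-sense disambiguation (this sketch assumes the NLTK "wordnet" and "punkt" data have been downloaded):

```python
# Disambiguating "bank" with simplified Lesk (pip install nltk).
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

sentence = word_tokenize("I went to the bank to deposit my paycheck")
sense = lesk(sentence, "bank")  # WordNet sense best matching the context
print(sense, "-", sense.definition())
```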
In speech communication, intelligibility is a measure of how comprehensible speech is in given conditions. Intelligibility is affected by the level and quality of the speech signal, the type and level of background noise, reverberation, and, for speech over communication devices, the properties of the communication system. A common standard measurement for the quality of the intelligibility of speech is the Speech Transmission Index (STI). The concept of speech intelligibility is relevant to several fields, including phonetics, human factors, acoustical engineering, and audiometry.
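In speech research, intelligibility is also commonly approximated by the word error rate (WER) between what the speaker intended and what a listener (or recognizer) transcribed; lower WER suggests more intelligible speech under the given conditions. A self-contained sketch:

```python
# Word error rate via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("please call stella", "please call stellar"))  # 0.33
```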
The Versant suite of tests is a set of computerized tests of spoken language available from Pearson PLC. Versant tests were the first fully automated tests of spoken language to use advanced speech processing technology to assess the spoken language skills of non-native speakers. The Versant language suite includes tests of English, Spanish, Dutch, French, and Arabic. Versant technology has also been applied to the assessment of Aviation English, children's oral reading assessment, and adult literacy assessment.
TIMIT is a corpus of phonemically and lexically transcribed speech of American English speakers of different sexes and dialects. Each transcribed element has been delineated in time.
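TIMIT's time alignments are stored in plain-text .PHN files, one "<start_sample> <end_sample> <phone>" line per segment at a 16 kHz sampling rate; a small reader sketch:

```python
# Read a TIMIT-style .PHN alignment file into (start_s, end_s, phone) tuples.
def read_phn(path: str, sample_rate: int = 16000):
    segments = []
    with open(path) as f:
        for line in f:
            start, end, phone = line.split()
            segments.append((int(start) / sample_rate,
                             int(end) / sample_rate,
                             phone))
    return segments
```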
A non-native speech database is a speech database of non-native pronunciations of English. Such databases are used in the development of: multilingual automatic speech recognition systems, text to speech systems, pronunciation trainers, and second language learning systems.
Fluency refers to continuity, smoothness, rate, and effort in speech production. It is also used to characterize language production, language ability or language proficiency.
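Simple fluency proxies such as pause count and phonation ratio can be estimated from energy-based silence detection; a sketch using librosa, where the decibel threshold and file path are illustrative assumptions:

```python
# Crude fluency proxies from non-silent intervals (pip install librosa).
import librosa

y, sr = librosa.load("utterance.wav", sr=None)
voiced = librosa.effects.split(y, top_db=30)  # non-silent [start, end] samples
speech_time = sum(end - start for start, end in voiced) / sr
total_time = len(y) / sr
print("pauses:", max(len(voiced) - 1, 0))
print("phonation ratio:", speech_time / total_time)
```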
The phonology of second languages differs from the phonology of first languages in various ways. The differences are considered to stem both from general characteristics of second-language speech, such as slower speech rate and lower proficiency than native speakers, and from the interaction between non-native speakers' first and second languages.
Augmented learning is an on-demand learning technique where the environment adapts to the learner. When remediation is provided on demand, learners can gain greater understanding of a topic while stimulating discovery and learning. Technologies incorporating rich media and interaction have demonstrated educational potential that scholars, teachers, and students are embracing. Instead of focusing on memorization, the learner has an adaptive learning experience based on the current context. The augmented content can be dynamically tailored to the learner's natural environment by displaying text, images, and video, or even by playing audio. This additional information is commonly shown in a pop-up window in computer-based environments.
RWTH ASR is a proprietary speech recognition toolkit.
Deep learning is a subset of machine learning that focuses on utilizing neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers in the network. Methods used can be either supervised, semi-supervised or unsupervised.
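The "stacked layers trained on data" idea in miniature, as a supervised PyTorch sketch fitting a two-layer network to toy data:

```python
# Minimal supervised deep-learning loop (pip install torch).
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 4)         # toy inputs
y = (x.sum(dim=1) > 0).long()  # toy binary labels
for _ in range(100):           # gradient-descent training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```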
LEPOR is an automatic language independent machine translation evaluation metric with tunable parameters and reinforced factors.
Julia Hirschberg is an American computer scientist noted for her research on computational linguistics and natural language processing.
The BABEL speech corpus is a corpus of recorded speech materials from five Central and Eastern European languages. Intended for use in speech technology applications, it was funded by a grant from the European Union and completed in 1998. It is distributed by the European Language Resources Association.
Language power (LP) is a measure of the ability to communicate effectively in a given language, specifically one that is not native to the speaker.
Peter John Roach is a British retired phonetician. He taught at the Universities of Leeds and Reading, and is best known for his work on the pronunciation of British English.
An audio deepfake is a product of artificial intelligence used to create convincing speech that sounds like specific people saying things they did not say. The technology was initially developed for applications intended to improve human life: for example, producing audiobooks and helping people who have lost their voices regain them. Commercially, it has opened the door to several opportunities, such as more personalized digital assistants, natural-sounding text-to-speech, and speech translation services.
Whisper is a machine learning model for speech recognition and transcription, created by OpenAI and first released as open-source software in September 2022.
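A minimal usage sketch with the openai-whisper package; the per-segment average log-probability it reports is the kind of model confidence signal discussed above (the audio path is a placeholder):

```python
# Transcription with Whisper (pip install openai-whisper).
import whisper

model = whisper.load_model("base")     # downloads weights on first use
result = model.transcribe("audio.wav")
print(result["text"])
for seg in result["segments"]:         # per-segment confidence signal
    print(seg["start"], seg["end"], seg["avg_logprob"])
```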