eSpeak

eSpeakNG
Original author(s): Jonathan Duddington
Developer(s): Alexander Epaneshnikov et al.
Initial release: February 2006
Stable release: 1.51 [1] / 2 April 2022
Repository: github.com/espeak-ng/espeak-ng/
Written in: C
Operating system: Linux, Windows, macOS, FreeBSD
Type: Speech synthesizer
License: GPLv3
Website: github.com/espeak-ng/espeak-ng/

eSpeak is a compact, free and open-source, cross-platform software speech synthesizer. It uses a formant synthesis method, providing many languages in a relatively small file size. eSpeakNG (Next Generation) is a continuation of the original developer's project with more feedback from native speakers.

Because of its small size and many languages, eSpeakNG is included in the NVDA [2] open-source screen reader for Windows, as well as in Android, [3] Ubuntu [4] and other Linux distributions. Its predecessor eSpeak was recommended by Microsoft in 2016 [5] and was used by Google Translate for 27 languages in 2010; [6] 17 of these were subsequently replaced by proprietary voices. [7]

The quality of the language voices varies greatly. In eSpeakNG's predecessor eSpeak, the initial versions of some languages were based on information found on Wikipedia. [8] Some languages have had more work or feedback from native speakers than others. Most of the people who have helped to improve the various languages are blind users of text-to-speech.

History

In 1995, Jonathan Duddington released the Speak speech synthesizer for RISC OS computers supporting British English. [9] On 17 February 2006, Speak 1.05 was released under the GPLv2 license, initially for Linux, with a Windows SAPI 5 version added in January 2007. [10] Development on Speak continued until version 1.14, when it was renamed to eSpeak.

Development of eSpeak continued from version 1.16 (there was no 1.15 release) [10] with the addition of an eSpeakEdit program for editing and building the eSpeak voice data. These were only available as separate source and binary downloads up to eSpeak 1.24. Version 1.24.02 was the first to be version-controlled using Subversion, [11] with separate source and binary downloads made available on SourceForge. [10] From version 1.27, eSpeak switched to the GPLv3 license. [11] The last official eSpeak release was 1.48.04 for Windows and Linux, 1.47.06 for RISC OS and 1.45.04 for macOS. [12] The last development release was 1.48.15, on 16 April 2015. [13]

eSpeak uses the Usenet scheme to represent phonemes with ASCII characters. [14]

eSpeak NG

On 25 June 2010, [15] Reece Dunn started a fork of eSpeak on GitHub using the 1.43.46 release. This started off as an effort to make it easier to build eSpeak on Linux and other POSIX platforms.

On 4 October 2015 (6 months after the 1.48.15 release of eSpeak), this fork started diverging more significantly from the original eSpeak. [16] [17]

On 8 December 2015, there were discussions on the eSpeak mailing list about the lack of activity from Jonathan Duddington in the eight months since the last eSpeak development release. These evolved into discussions of continuing development of eSpeak in Jonathan's absence. [18] [19] The result was the creation of the espeak-ng (Next Generation) fork, using the GitHub version of eSpeak as the basis for future development.

On 11 December 2015, the espeak-ng fork was started. [20] The first release of espeak-ng was 1.49.0 on 10 September 2016, [21] containing significant code cleanup, bug fixes, and language updates.

Features

eSpeakNG can be used as a command-line program, or as a shared library.
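When used as a shared library, eSpeakNG is controlled through its C API, declared in the speak_lib.h header. The following is a minimal sketch, assuming libespeak-ng and its development header are installed (the header path can vary by packaging):

    #include <string.h>
    #include <espeak-ng/speak_lib.h>   /* header location may differ by distribution */

    int main(void)
    {
        const char *text = "Hello from eSpeak NG.";

        /* Initialize for direct playback; returns the sample rate,
           or EE_INTERNAL_ERROR on failure. */
        if (espeak_Initialize(AUDIO_OUTPUT_PLAYBACK, 0, NULL, 0) == EE_INTERNAL_ERROR)
            return 1;

        espeak_SetVoiceByName("en");              /* select the English voice */

        /* The size argument must include the terminating NUL byte. */
        espeak_Synth(text, strlen(text) + 1, 0, POS_CHARACTER, 0,
                     espeakCHARS_AUTO, NULL, NULL);

        espeak_Synchronize();                     /* block until speech has finished */
        espeak_Terminate();
        return 0;
    }

On a typical Linux system this compiles with something like cc hello.c -lespeak-ng.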

It supports Speech Synthesis Markup Language (SSML).
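With the -m option, eSpeakNG interprets such markup in the input text. A small sketch, assuming a build whose SSML subset includes the prosody element:

    espeak-ng -m '<speak>Normal speed, <prosody rate="slow">and now slower.</prosody></speak>'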

Language voices are identified by the language's ISO 639-1 code. They can be modified by "voice variants". These are text files which can change characteristics such as pitch range, add effects such as echo, whisper and croaky voice, or make systematic adjustments to formant frequencies to change the sound of the voice. For example, "af" is the Afrikaans voice. "af+f2" is the Afrikaans voice modified with the "f2" voice variant which changes the formants and the pitch range to give a female sound.
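From the command line this looks as follows (the Afrikaans sample phrase is illustrative):

    espeak-ng -v af "Goeie môre"       # default Afrikaans voice
    espeak-ng -v af+f2 "Goeie môre"    # same voice with the f2 female variant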

eSpeakNG uses an ASCII representation of phoneme names which is loosely based on the Usenet system.

Phonetic representations can be included within text input by including them within double square-brackets. For example: espeak-ng -v en "Hello [[w3:ld]]" will say Hello world in English.

Synthesis method


eSpeakNG can be used as a text-to-speech converter in different ways, depending on which steps of the text-to-speech pipeline the user wants to use.

Step 1 — text-to-phoneme translation

There are many languages (notably English) which do not have straightforward one-to-one rules between writing and pronunciation; therefore, the first step in text-to-speech generation has to be text-to-phoneme translation.

  1. Input text is translated into pronunciation phonemes (e.g. the input text xerox is translated into zi@r0ks).
  2. The pronunciation phonemes are synthesized into sound (e.g. zi@r0ks is voiced in a monotone way).

To add intonation to the speech, prosody data are necessary (e.g. syllable stress, falling or rising pitch of the fundamental frequency, pauses, etc.), which allows more human, non-monotonous speech to be synthesized. In the eSpeakNG format, a stressed syllable is marked with an apostrophe, e.g. z'i@r0ks, which produces more natural speech with intonation.
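This translation step can be inspected from the command line. A small sketch, using the -q option (no audio output) together with -x (phoneme mnemonics) or --ipa:

    espeak-ng -q -x "xerox"      # prints the eSpeakNG phoneme string, e.g. z'i@r0ks
    espeak-ng -q --ipa "xerox"   # the same pronunciation rendered in IPA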

For comparison, two samples with and without prosody data:

  1. [[DIs Iz m0noUntoUn spi:tS]] is spoken in a monotone way
  2. [[DIs Iz 'Int@n,eItI2d sp'i:tS]] is spoken with intonation

If eSpeakNG is used only to generate prosody data, that data can be used as input for MBROLA diphone voices.

Step 2 — sound synthesis from prosody data

eSpeakNG provides two different types of formant speech synthesis, using two different approaches: its own eSpeakNG synthesizer and a Klatt synthesizer. [22]

  1. The eSpeakNG synthesizer creates voiced speech sounds, such as vowels and sonorant consonants, by additive synthesis, adding together sine waves to make the total sound (see the sketch after this list). Unvoiced consonants, e.g. /s/, are made by playing recorded sounds, [23] because they are rich in harmonics, which makes additive synthesis less effective. Voiced consonants such as /z/ are made by mixing a synthesized voiced sound with a recorded sample of unvoiced sound.
  2. The Klatt synthesizer mostly uses the same formant data as the eSpeakNG synthesizer, but it produces sounds by subtractive synthesis: it starts with generated noise, which is rich in harmonics, and then applies digital filters and enveloping to shape the frequency spectrum and sound envelope of the particular consonant (s, t, k) or sonorant (l, m, n) sound.
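The following toy C program illustrates the additive approach of the first method: it sums sine-wave harmonics of a fundamental frequency, weighting harmonics near formant frequencies more strongly. It is a minimal sketch of the general technique, not eSpeakNG's actual code; the formant values and bandwidth are illustrative assumptions.

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    int main(void)
    {
        const int rate = 22050;          /* output sample rate (Hz) */
        const double f0 = 120.0;         /* fundamental frequency (pitch), Hz */
        /* Assumed formant centre frequencies for an /a/-like vowel. */
        const double formant[3] = { 700.0, 1200.0, 2600.0 };

        for (int i = 0; i < rate; i++) { /* generate one second of audio */
            double t = (double)i / rate;
            double sample = 0.0;

            /* Add sine-wave harmonics of f0; harmonics close to a formant
               get a larger amplitude, which shapes the vowel quality. */
            for (int h = 1; h * f0 < 4000.0; h++) {
                double freq = h * f0;
                double amp = 0.0;
                for (int k = 0; k < 3; k++) {
                    double d = (freq - formant[k]) / 200.0; /* crude bandwidth */
                    amp += exp(-d * d);
                }
                sample += amp * sin(2.0 * M_PI * freq * t);
            }

            short s = (short)(2000.0 * sample);  /* scale into 16-bit range */
            fwrite(&s, sizeof s, 1, stdout);     /* raw mono 16-bit PCM */
        }
        return 0;
    }

The raw output can be auditioned with, for example, ./a.out | aplay -f S16_LE -r 22050 -c 1 on a Linux system with the ALSA utilities installed.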

For the MBROLA voices, eSpeakNG converts the text to phonemes and associated pitch contours. It passes these to the MBROLA program using the PHO file format and captures the audio that MBROLA outputs. That audio is then handled by eSpeakNG.
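From the command line this corresponds to the --pho option, which writes the MBROLA .pho phoneme data instead of speaking it. A sketch, assuming the mb-fr1 eSpeakNG voice and the MBROLA fr1 database are installed (paths vary by distribution):

    espeak-ng -v mb-fr1 --pho "bonjour" > bonjour.pho    # phoneme names with durations and pitch points
    mbrola /usr/share/mbrola/fr1/fr1 bonjour.pho bonjour.wav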

Languages

eSpeakNG performs text-to-speech synthesis for the following languages: [24]

  1. Afrikaans [25]
  2. Albanian [26]
  3. Amharic
  4. Ancient Greek
  5. Arabic 1
  6. Aragonese [27]
  7. Armenian (Eastern Armenian)
  8. Armenian (Western Armenian)
  9. Assamese
  10. Azerbaijani
  11. Bashkir
  12. Basque
  13. Belarusian
  14. Bengali
  15. Bishnupriya Manipuri
  16. Bosnian
  17. Bulgarian [27]
  18. Burmese
  19. Cantonese [27]
  20. Catalan [27]
  21. Cherokee
  22. Chinese (Mandarin)
  23. Croatian [27]
  24. Czech
  25. Chuvash
  26. Danish [27]
  27. Dutch [27]
  28. English (American) [27]
  29. English (British)
  30. English (Caribbean)
  31. English (Lancastrian)
  32. English (New York City) 5
  33. English (Received Pronunciation)
  34. English (Scottish)
  35. English (West Midlands)
  36. Esperanto [27]
  37. Estonian [27]
  38. Finnish [27]
  39. French (Belgian) [27]
  40. French (Canada)
  41. French (France)
  42. Georgian [27]
  43. German [27]
  44. Greek (Modern) [27]
  45. Greenlandic
  46. Guarani
  47. Gujarati
  48. Hakka Chinese 3
  49. Haitian Creole
  50. Hawaiian
  51. Hebrew
  52. Hindi [27]
  53. Hungarian [27]
  54. Icelandic [27]
  55. Indonesian [27]
  56. Ido
  57. Interlingua
  58. Irish [27]
  59. Italian [27]
  60. Japanese 4 [28]
  61. Kannada [27]
  62. Kazakh
  63. Klingon
  64. Kʼicheʼ
  65. Konkani [29]
  66. Korean
  67. Kurdish [27]
  68. Kyrgyz
  69. Quechua
  70. Latin
  71. Latgalian
  72. Latvian [27]
  73. Lingua Franca Nova
  74. Lithuanian
  75. Lojban [27]
  76. Luxembourgish
  77. Macedonian
  78. Malay [27]
  79. Malayalam [27]
  80. Maltese
  81. Manipuri
  82. Māori
  83. Marathi [27]
  84. Nahuatl (Classical)
  85. Nepali [27]
  86. Norwegian (Bokmål) [27]
  87. Nogai
  88. Oromo
  89. Papiamento
  90. Persian [27]
  91. Persian (Latin alphabet) 2
  92. Polish [27]
  93. Portuguese (Brazilian) [27]
  94. Portuguese (Portugal)
  95. Punjabi [30]
  96. Pyash (a constructed language)
  97. Quenya
  98. Romanian [27]
  99. Russian [27]
  100. Russian (Latvia)
  101. Scottish Gaelic
  102. Serbian [27]
  103. Setswana
  104. Shan (Tai Yai)
  105. Sindarin
  106. Sindhi
  107. Sinhala
  108. Slovak [27]
  109. Slovenian
  110. Spanish (Spain) [27]
  111. Spanish (Latin American)
  112. Swahili [25]
  113. Swedish [27]
  114. Tamil [27]
  115. Tatar
  116. Telugu
  117. Thai
  118. Turkmen
  119. Turkish [27]
  120. Uyghur
  121. Ukrainian
  122. Urarina
  123. Urdu
  124. Uzbek
  125. Vietnamese (Central Vietnamese) [27]
  126. Vietnamese (Northern Vietnamese)
  127. Vietnamese (Southern Vietnamese)
  128. Welsh
Notes:
  1. Currently, only fully diacritized Arabic is supported.
  2. Persian written using English (Latin) characters.
  3. Currently, only Pha̍k-fa-sṳ is supported.
  4. Currently, only Hiragana and Katakana are supported.
  5. Currently unreleased; it must be built from the latest source code.


References

  1. "Release 1.51".
  2. "Switch to eSpeak NG in NVDA distribution · Issue #5651 · nvaccess/nvda". GitHub.
  3. "eSpeak TTS for Android".
  4. "espeak-ng package : Ubuntu". Launchpad. 21 December 2023.
  5. "Download voices for Immersive Reader, Read Mode, and Read Aloud".
  6. Google blog, Giving a voice to more languages on Google Translate, May 2010
  7. Google blog, Listen to us now, December 2010.
  8. "eSpeak Speech Synthesizer". espeak.sourceforge.net.
  9. "eSpeak: Speech Synthesizer". espeak.sourceforge.net.
  10. "eSpeak: Speech synthesis - Browse /espeak at SourceForge.net".
  11. "eSpeak: speech synthesis / Code / Browse Commits". sourceforge.net.
  12. "Espeak: Downloads".
  13. http://espeak.sourceforge.net/test/latest.html
  14. van Leussen, Jan-Willem; Tromp, Maarten (26 July 2007). "Latin to Speech". p. 6. CiteSeerX 10.1.1.396.7811.
  15. "Build: Allow portaudio 18 and 19 to be switched easily. · rhdunn/espeak@63daaec". GitHub.
  16. "Espeakedit: Fix argument processing for unicode argv types · rhdunn/espeak@61522a1". GitHub.
  17. "Switch to eSpeak NG in NVDA distribution · Issue #5651 · nvaccess/nvda". GitHub.
  18. "[Espeak-general] Taking ownership of the espeak project and its future | eSpeak: speech synthesis". sourceforge.net.
  19. "[Espeak-general] Vote for new main espeak developer | eSpeak: speech synthesis". sourceforge.net.
  20. Rebrand the espeak program to espeak-ng.
  21. "Release 1.49.0 · espeak-ng/espeak-ng". GitHub.
  22. Klatt, Dennis H. (1979). "Software for a cascade/parallel formant synthesizer" (PDF). J. Acoustical Society of America, 67(3) March 1980.
  23. "espeak-ng". GitHub.
  24. "ESpeak NG Text-to-Speech". GitHub . 13 February 2022.
  25. 1 2 Butgereit, L., & Botha, A. (2009, May). Hadeda: The noisy way to practice spelling vocabulary using a cell phone. In The IST-Africa 2009 Conference, Kampala, Uganda.
  26. Hamiti, M., & Kastrati, R. (2014). Adapting eSpeak for converting text into speech in Albanian. International Journal of Computer Science Issues (IJCSI), 11(4), 21.
  27. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 Kayte, S., & Gawali, D. B. (2015). Marathi Speech Synthesis: A review. International Journal on Recent and Innovation Trends in Computing and Communication, 3(6), 3708-3711.
  28. Pronk, R. (2013). Adding Japanese language synthesis support to the eSpeak system. University of Amsterdam.
  29. Mohanan, S., Salkar, S., Naik, G., Dessai, N. F., & Naik, S. (2012). Text Reader for Konkani Language. Automation and Autonomous System, 4(8), 409-414.
  30. Kaur, R., & Sharma, D. (2016). An Improved System for Converting Text into Speech for Punjabi Language using eSpeak. International Research Journal of Engineering and Technology, 3(4), 500-504.