Retrieval-based Voice Conversion

Developer(s) RVC-Project team
Initial release 2023
Repository GitHub
Written in Python
Operating system Windows, Linux, macOS
Available in English, Simplified Chinese, Japanese, Korean, French, Turkish, Portuguese
Type Voice conversion software
License MIT License

Retrieval-based Voice Conversion (RVC) is an open-source voice conversion AI algorithm that enables realistic speech-to-speech transformations while accurately preserving the intonation and audio characteristics of the original speaker.[1]

Overview

Unlike text-to-speech systems such as ElevenLabs, RVC produces speech-to-speech output. It maintains the modulation, timbre, and vocal attributes of the original speaker, making it suitable for applications where emotional tone is crucial.

The algorithm enables both pre-processed and real-time voice conversion with low latency. This real-time capability marks a significant advancement over previous AI voice conversion technologies, such as so-vits-svc. Its speed and accuracy have led many to note that its generated voices sound near-indistinguishable from "real life", provided that sufficient computational resources (e.g., a powerful GPU and ample RAM) are available when running it locally and that a high-quality voice model is used.[2][3][4]
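The "retrieval" in RVC refers to looking up, for each frame of content features extracted from the input speech, the closest-matching frames in an index built from the target speaker's training data, then blending the retrieved features back into the query before synthesis. The sketch below illustrates that retrieval-and-blend step with a brute-force nearest-neighbour search in NumPy; it is a simplified illustration of the general technique, not RVC's actual implementation, which uses an approximate-nearest-neighbour index over learned content features, and the `index_rate` blending weight here mirrors a common user-facing parameter of such tools.

```python
import numpy as np

def retrieve_and_blend(query_feats, index_feats, index_rate=0.75, k=4):
    """For each query frame, find the k closest training-set frames
    and blend their mean back into the query feature vector."""
    # Pairwise squared Euclidean distances, shape (n_query, n_index)
    d2 = ((query_feats[:, None, :] - index_feats[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]        # indices of k nearest frames
    retrieved = index_feats[knn].mean(axis=1)  # average of retrieved frames
    # Interpolate between retrieved target features and the original query
    return index_rate * retrieved + (1 - index_rate) * query_feats

rng = np.random.default_rng(0)
index = rng.normal(size=(100, 8))  # toy "training" feature bank
query = rng.normal(size=(5, 8))    # toy content features from input speech
out = retrieve_and_blend(query, index)
print(out.shape)  # (5, 8)
```

Pulling real frames from the target speaker's data, rather than relying on the generator alone, is what helps the converted voice retain speaker-specific detail.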

Applications and concerns

The technology enables voice changing and mimicry, allowing users to create accurate models of others using only a few minutes of clear audio samples. These voice models can be saved as .pth (PyTorch) files. While this capability facilitates numerous creative applications, it has also raised concerns about potential misuse as deepfake software for identity theft and malicious impersonation through voice calls.
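A `.pth` file is simply a serialized PyTorch object, typically a dictionary holding the model's weights alongside metadata. The sketch below shows saving and reloading such a checkpoint; the key names (`weight`, `sr`, `f0`) are illustrative assumptions for this example, not RVC's actual checkpoint schema.

```python
import torch

# Hypothetical checkpoint layout -- key names are illustrative,
# not RVC's actual schema.
checkpoint = {
    "weight": {"generator.conv.weight": torch.zeros(3, 3)},
    "sr": 40000,   # sample rate the model was trained at
    "f0": True,    # whether the model uses pitch (F0) conditioning
}
torch.save(checkpoint, "voice_model.pth")

loaded = torch.load("voice_model.pth", map_location="cpu")
print(sorted(loaded.keys()))  # ['f0', 'sr', 'weight']
```

Because the file bundles weights and metadata in one object, a single `.pth` is all that is needed to share or load a trained voice model.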

In pop culture

RVC inference has been used to create realistic song covers, such as replacing original vocals with characters like Twilight Sparkle and Mordecai to have them sing duets of popular music like "Airplanes" and "Somebody That I Used to Know". These AI-generated covers, which can sound strikingly similar to the voice imitated, have gained popularity on platforms like YouTube as humorous memes.[5]

Related Research Articles

Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text (STT). It incorporates knowledge and research in the computer science, linguistics and computer engineering fields. The reverse process is speech synthesis.

Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. The reverse process is speech recognition.

Business process automation (BPA), also known as business automation, refers to the technology-enabled automation of business processes.

In video games, artificial intelligence (AI) is used to generate responsive, adaptive or intelligent behaviors primarily in non-playable characters (NPCs) similar to human-like intelligence. Artificial intelligence has been an integral part of video games since their inception in 1948, first seen in the game Nim. AI in video games is a distinct subfield and differs from academic AI. It serves to improve the game-player experience rather than machine learning or decision making. During the golden age of arcade video games the idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player's input. Modern games often implement existing techniques such as pathfinding and decision trees to guide the actions of NPCs. AI is often used in mechanisms which are not immediately visible to the user, such as data mining and procedural-content generation. One of the most infamous examples of this NPC technology and gradual difficulty levels can be found in the game Mike Tyson's Punch-Out!! (1987).

Human image synthesis is technology that can be applied to make believable and even photorealistic renditions of human likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence was applied to synthesize images and video that look like humans without need for human assistance once the training phase has been completed, whereas the old-school route required massive amounts of human work.

A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to streamline task execution. The interaction may be via text, graphical interface, or voice, as some virtual assistants are able to interpret human speech and respond via synthesized voices.

Music and artificial intelligence is the development of music software programs which use AI to generate music. As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn based on past data, such as in computer accompaniment technology, wherein the AI is capable of listening to a human performer and performing accompaniment. Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. There are other AI applications in music that cover not only music composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control. Current research includes the application of AI in music composition, performance, theory and digital sound processing.

Amazon Echo, often shortened to Echo, is a brand of smart speakers developed by Amazon. Echo devices connect to the voice-controlled intelligent personal assistant service Alexa, which will respond when a user says "Alexa". Users may change this wake word to "Amazon", "Echo", "Computer", and other options. The features of the device include voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, and playing audiobooks, in addition to providing weather, traffic and other real-time information. It can also control several smart devices, acting as a home automation hub.

Adobe VoCo is an unreleased audio editing and generating prototype software by Adobe that enables novel editing and generation of audio. Dubbed "Photoshop-for-voice", it was first previewed at the Adobe MAX event in November 2016. The technology shown at Adobe MAX was a preview that could potentially be incorporated into Adobe Creative Cloud. It was later revealed that VoCo was never meant to be released and was intended as a research prototype.

WaveNet is a deep neural network for generating raw audio. It was created by researchers at London-based AI firm DeepMind. The technique, outlined in a paper in September 2016, is able to generate relatively realistic-sounding human-like voices by directly modelling waveforms using a neural network method trained with recordings of real speech. Tests with US English and Mandarin reportedly showed that the system outperforms Google's best existing text-to-speech (TTS) systems, although as of 2016 its text-to-speech synthesis still was less convincing than actual human speech. WaveNet's ability to generate raw waveforms means that it can model any kind of audio, including music.

Deepfakes are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media and a modern form of media prank.

Voice computing is the discipline that develops hardware or software to process voice inputs.

Otter.ai, Inc. is an American transcription software company based in Mountain View, California. The company develops speech to text transcription applications using artificial intelligence and machine learning. Its software, called Otter, shows captions for live speakers, and generates written transcriptions of speech.

Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media", individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos that insert the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.

Audio deepfake technology, also referred to as voice cloning or deepfake audio, is an application of artificial intelligence designed to generate speech that convincingly mimics specific individuals, often synthesizing phrases or sentences they have never spoken. Initially developed with the intent to enhance various aspects of human life, it has practical applications such as generating audiobooks and assisting individuals who have lost their voices due to medical conditions. Additionally, it has commercial uses, including the creation of personalized digital assistants, natural-sounding text-to-speech systems, and advanced speech translation services.

Deep learning speech synthesis refers to the application of deep learning models to generate natural-sounding human speech from written text (text-to-speech) or spectrum (vocoder). Deep neural networks are trained using large amounts of recorded speech and, in the case of a text-to-speech system, the associated labels and/or input text.

ElevenLabs is a software company that specializes in developing natural-sounding speech synthesis software using deep learning.

Whisper is a machine learning model for speech recognition and transcription, created by OpenAI and first released as open-source software in September 2022.

Digital signal processing and machine learning are two technologies that are often combined.

References

  1. Cochard, David (January 7, 2024). "RVC: An AI-Powered Voice Changer". Medium.
  2. "What's RVC". AI Hub. Retrieved 2024-05-27.
  3. Masuda, Naotake (September 21, 2023). "State-of-the-art Singing Voice Conversion methods". Medium.
  4. "Understanding RVC - Retrieval-based Voice Conversion". Retrieved 2024-10-23.
  5. "RVC WebUI How To – Make AI Song Covers in Minutes! (Voice Conversion Guide) - Tech Tactician". Tech Tactician. 2023-07-06. Retrieved 2024-05-27.