Software effect processor

Image caption: top: software instruments; bottom: software effect processors, on Cubase 6 (CC-BY-SA-3.0 image).

A software effect processor is a computer program which is able to modify the signal coming from a digital audio source in real time.

Digital signal (signal processing)

In the context of digital signal processing (DSP), a digital signal is a discrete-time signal for which not only the time but also the amplitude has discrete values; in other words, its samples take on only values from a discrete set. If that discrete set is finite, the discrete values can be represented with digital words of a finite width. Most commonly, these discrete values are represented as fixed-point words or floating-point words.
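
As a minimal sketch of this quantization step (the function name and rounding choice are illustrative assumptions, not taken from any particular standard), the following C++ fragment maps a normalized floating-point sample to a signed 16-bit fixed-point word, the word width used for CD audio:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>

// Map a normalized floating-point sample (-1.0 .. 1.0) to a signed
// 16-bit fixed-point word.
std::int16_t quantize16(double sample) {
    sample = std::clamp(sample, -1.0, 1.0);                            // keep the value in the legal range
    return static_cast<std::int16_t>(std::lround(sample * 32767.0));   // scale and round to the nearest step
}

int main() {
    std::cout << quantize16(0.5) << '\n';   // 16384, roughly half of full scale
    std::cout << quantize16(-1.0) << '\n';  // -32767, negative full scale
}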

Digital audio: technology that records, stores, and reproduces sound

Digital audio is sound that has been recorded in, or converted into, digital form. In digital audio, the sound wave of the audio signal is encoded as numerical samples in a continuous sequence. For example, in CD audio, samples are taken 44,100 times per second, each with a 16-bit sample depth. Digital audio is also the name for the entire technology of sound recording and reproduction using audio signals that have been encoded in digital form. Following significant advances in digital audio technology during the 1970s, it gradually replaced analog audio technology in many areas of audio engineering and telecommunications in the 1990s and 2000s.

An audio signal is a representation of sound, typically using a level of electrical voltage for analog signals, and a series of binary numbers for digital signals. Audio signals have frequencies in the audio frequency range of roughly 20 to 20,000 Hz, which corresponds to the lower and upper limits of human hearing. Audio signals may be synthesized directly, or may originate at a transducer such as a microphone, musical instrument pickup, phonograph cartridge, or tape head. Loudspeakers or headphones convert an electrical audio signal back into sound.
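
For example, the CD sample rate of 44,100 Hz is chosen so that the Nyquist frequency (half the sample rate) lies just above the 20,000 Hz upper limit of human hearing; the short C++ sketch below only restates that arithmetic and is purely illustrative:

#include <iostream>

int main() {
    const double sampleRate = 44100.0;        // CD audio sample rate in Hz
    const double nyquist = sampleRate / 2.0;  // highest frequency the sampled signal can represent
    const double hearingLimit = 20000.0;      // approximate upper limit of human hearing in Hz

    std::cout << "Nyquist frequency: " << nyquist << " Hz\n";
    if (nyquist > hearingLimit) {
        std::cout << "The full audible band can be represented.\n";
    }
}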


Principle of operation

The digital audio signal, which may originate from an analog source (after conversion to digital) or from an already digital source (such as an audio file or a software synthesizer), is stored in temporary allotments of computer memory called buffers. Once there, the software effect processor modifies the signal according to a specific algorithm, which creates the desired effect. After this operation, the signal may be converted from digital to analog and sent to an audible output, stored in digital form for later reproduction or editing, or sent to other software effect processors for additional processing.
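
The following C++ sketch illustrates that buffer-based model under simple assumptions (the gain effect, the 32-sample buffer and the function name are hypothetical, not part of any plug-in standard): one buffer of samples is handed to an effect routine, modified in place, and is then ready to be passed onward to an output or to the next processor.

#include <iostream>
#include <vector>

// Apply the effect algorithm (here, a simple gain change) to one buffer
// of samples; the modified buffer is then ready for output, storage,
// or the next effect in the chain.
void processGain(std::vector<float>& buffer, float gain) {
    for (float& sample : buffer) {
        sample *= gain;
    }
}

int main() {
    std::vector<float> buffer(32, 0.0f);  // one 32-sample buffer, standing in for driver-supplied audio
    buffer[0] = 1.0f;                     // a single impulse

    processGain(buffer, 0.5f);            // attenuate by roughly 6 dB

    std::cout << "first sample after processing: " << buffer[0] << '\n';  // 0.5
}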

A software synthesizer, also known as a softsynth or software instrument, is a computer program or plug-in that generates digital audio, usually for music. Computer software that can create sounds or music is not new, but advances in processing speed now allow softsynths to accomplish the same tasks that previously required the dedicated hardware of a conventional synthesizer. Softsynths are usually cheaper and more portable than dedicated hardware, and easier to interface with other music software such as music sequencers.
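
A minimal illustration of a software instrument, assuming nothing beyond a sine oscillator (the function name, frequency and buffer length below are hypothetical), might generate one buffer of samples like this:

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Render one buffer of a sine tone: the simplest possible software
// instrument, producing digital audio entirely in software.
std::vector<float> renderSine(double frequencyHz, double sampleRate, std::size_t numSamples) {
    const double pi = 3.141592653589793;
    const double phaseIncrement = 2.0 * pi * frequencyHz / sampleRate;
    std::vector<float> buffer(numSamples);
    for (std::size_t i = 0; i < numSamples; ++i) {
        buffer[i] = static_cast<float>(std::sin(phaseIncrement * static_cast<double>(i)));
    }
    return buffer;
}

int main() {
    const auto buffer = renderSine(440.0, 44100.0, 64);  // 64 samples of a 440 Hz tone at the CD sample rate
    std::cout << "rendered " << buffer.size() << " samples, first value " << buffer[0] << '\n';
}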

In computer science, a data buffer is a region of a physical memory storage used to temporarily store data while it is being moved from one place to another. Typically, the data is stored in a buffer as it is retrieved from an input device or just before it is sent to an output device. However, a buffer may be used when moving data between processes within a computer. This is comparable to buffers in telecommunication. Buffers can be implemented in a fixed memory location in hardware—or by using a virtual data buffer in software, pointing at a location in the physical memory. In all cases, the data stored in a data buffer are stored on a physical storage medium. A majority of buffers are implemented in software, which typically use the faster RAM to store temporary data, due to the much faster access time compared with hard disk drives. Buffers are typically used when there is a difference between the rate at which data is received and the rate at which it can be processed, or in the case that these rates are variable, for example in a printer spooler or in online video streaming.
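
A common software realization of such a buffer in audio code is a fixed-size circular (ring) buffer, in which a producer (for example a synthesizer or input driver) writes samples at one rate and a consumer (for example an effect processor) reads them at another. The sketch below is illustrative only; its class and method names are assumptions rather than any particular library's API.

#include <cstddef>
#include <iostream>
#include <vector>

// A fixed-size circular (ring) buffer that smooths out differences
// between the rate at which samples arrive and the rate at which
// they are consumed.
class RingBuffer {
public:
    explicit RingBuffer(std::size_t capacity) : data_(capacity, 0.0f) {}

    bool push(float sample) {
        if (count_ == data_.size()) return false;        // buffer full
        data_[(head_ + count_) % data_.size()] = sample;
        ++count_;
        return true;
    }

    bool pop(float& sample) {
        if (count_ == 0) return false;                   // buffer empty
        sample = data_[head_];
        head_ = (head_ + 1) % data_.size();
        --count_;
        return true;
    }

private:
    std::vector<float> data_;
    std::size_t head_ = 0;   // index of the oldest stored sample
    std::size_t count_ = 0;  // number of samples currently stored
};

int main() {
    RingBuffer buffer(4);
    buffer.push(0.25f);
    buffer.push(0.5f);

    float sample = 0.0f;
    while (buffer.pop(sample)) {
        std::cout << sample << '\n';  // prints 0.25 then 0.5
    }
}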

Latency

The larger the buffer, the longer it takes to fill with digital audio data. Large buffers therefore increase the delay between input and output when processing audio on a computer; this delay is usually called latency. Every system has practical limits: buffers that are too small would give negligible latency but cannot be processed smoothly by the computer, so a reasonable size starts at about 32 samples. Processor load does not affect latency directly (once a buffer size is set, the latency is constant), but at very high processor loads the processing starts to drop out. Increasing the buffer size or quitting other applications helps to keep playback smooth.
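
The latency contributed by one buffer is simply its length divided by the sample rate; the short C++ sketch below only restates that arithmetic for a few common buffer sizes at 44,100 Hz (the chosen sizes are illustrative):

#include <iostream>

int main() {
    const double sampleRate = 44100.0;               // samples per second
    const int bufferSizes[] = {32, 128, 512, 2048};  // typical buffer sizes in samples

    for (int bufferSize : bufferSizes) {
        // Time needed to fill one buffer, i.e. the delay that buffer adds, in milliseconds.
        const double latencyMs = 1000.0 * bufferSize / sampleRate;
        std::cout << bufferSize << " samples -> " << latencyMs << " ms per buffer\n";
    }
}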

Drivers

Microsoft Windows

The default Windows drivers are not optimized for low-latency effect processing. As a solution, Audio Stream Input/Output (ASIO) was created. ASIO is supported by most professional music applications, and most sound cards aimed at this market support it. If the hardware manufacturer does not provide ASIO drivers, a universal ASIO driver named ASIO4ALL can be used with any audio interface. ASIO drivers can also be emulated; in that case the driver name is ASIO Multimedia. However, the latency when using these emulated drivers is very high.

Audio Stream Input/Output

Audio Stream Input/Output (ASIO) is a computer sound card driver protocol for digital audio specified by Steinberg, providing a low-latency and high-fidelity interface between a software application and a computer's sound card. Whereas Microsoft's DirectSound is commonly used as an intermediary signal path for non-professional users, ASIO allows musicians and sound engineers to access external hardware directly.
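
Low-latency driver models of this kind are typically callback-based: the driver hands the application an input buffer and expects the matching output buffer to be filled immediately. The C++ sketch below shows only that general shape; the type and function names are illustrative assumptions and are not the actual ASIO SDK interface.

#include <cstddef>
#include <iostream>
#include <vector>

// Generic shape of a callback-based low-latency driver model (illustrative
// only). The driver owns the buffers and invokes the callback whenever the
// hardware needs the next block of audio.
using AudioCallback = void (*)(const float* input, float* output, std::size_t numFrames);

// Stand-in for the driver side: in a real system this would be driven by
// the sound card's hardware timing, not by a simple loop.
void fakeDriverRun(AudioCallback callback, std::size_t numFrames, int numBlocks) {
    std::vector<float> input(numFrames, 0.25f);  // pretend capture data
    std::vector<float> output(numFrames, 0.0f);
    for (int block = 0; block < numBlocks; ++block) {
        callback(input.data(), output.data(), numFrames);
    }
    std::cout << "last output sample: " << output[numFrames - 1] << '\n';
}

// Application side: process one block inside the driver's callback.
void processBlock(const float* input, float* output, std::size_t numFrames) {
    for (std::size_t i = 0; i < numFrames; ++i) {
        output[i] = input[i] * 2.0f;  // trivial effect: roughly +6 dB gain
    }
}

int main() {
    fakeDriverRun(&processBlock, 64, 4);  // 64-frame buffers, four blocks
}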

Apple Mac OS X

All Mac-compatible audio hardware uses Core Audio drivers, so software effect processors can work with low latency and good performance.


Related Research Articles

Latency is a time interval between the stimulation and response, or, from a more general point of view, a time delay between the cause and the effect of some physical change in the system being observed. Latency is physically a consequence of the limited velocity with which any physical interaction can propagate. The magnitude of this velocity is always less than or equal to the speed of light. Therefore, every physical system will experience some sort of latency, regardless of the nature of stimulation that it has been exposed to.

Microcontroller: small computer on a single integrated circuit

A microcontroller is a small computer on a single integrated circuit. In modern terminology, it is similar to, but less sophisticated than, a system on a chip (SoC); an SoC may include a microcontroller as one of its components. A microcontroller contains one or more CPUs along with memory and programmable input/output peripherals. Program memory in the form of ferroelectric RAM, NOR flash or OTP ROM is also often included on chip, as well as a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general purpose applications consisting of various discrete chips.

Sound card: internal computer expansion card that facilitates the input and output of audio signals

A sound card is an internal expansion card that provides input and output of audio signals to and from a computer under control of computer programs. The term sound card is also applied to external audio interfaces used for professional audio applications.

In electronics and telecommunications, jitter is the deviation from true periodicity of a presumably periodic signal, often in relation to a reference clock signal. In clock recovery applications it is called timing jitter. Jitter is a significant, and usually undesired, factor in the design of almost all communications links.

Framebuffer: portion of RAM containing a bitmap that drives a video display

A framebuffer is a portion of RAM containing a bitmap that drives a video display. It is a memory buffer containing a complete frame of data. Modern video cards contain framebuffer circuitry in their cores. This circuitry converts an in-memory bitmap into a video signal that can be displayed on a computer monitor.

Digital signal processor: specialized microprocessor

A digital signal processor (DSP) is a specialized microprocessor, with its architecture optimized for the operational needs of digital signal processing.

Mixing console: electronic device for combining sounds of many different audio signals

In sound recording and reproduction, and sound reinforcement systems, a mixing console is an electronic device for combining sounds of many different audio signals. Inputs to the console include microphones being used by singers and for picking up acoustic instruments, signals from electric or electronic instruments, or recorded music. Depending on the type, a mixer is able to control analog or digital signals. The modified signals are summed to produce the combined output signals, which can then be broadcast, amplified through a sound reinforcement system or recorded.

Virtual Studio Technology (VST) is an audio plug-in software interface that integrates software synthesizer and effects in digital audio workstations. VST and similar technologies use digital signal processing to simulate traditional recording studio hardware in software. Thousands of plugins exist, both commercial and freeware, and a large number of audio applications support VST under license from its creator, Steinberg.

Digital audio workstation: electronic system designed primarily for editing digital audio

A digital audio workstation (DAW) is an electronic device or application software used for recording, editing and producing audio files. DAWs come in a wide variety of configurations from a single software program on a laptop, to an integrated stand-alone unit, all the way to a highly complex configuration of numerous components controlled by a central computer. Regardless of configuration, modern DAWs have a central interface that allows the user to alter and mix multiple recordings and tracks into a final produced piece.

Audio editing software: computer application for manipulating digital audio

Audio editing software is software that allows editing and generating of audio data. It can be implemented completely or partly as a library, as a computer application, as a web application or as a loadable kernel module. Wave editors are digital audio editors, and many software packages are available to perform this function. Most can edit music, apply effects and filters, adjust stereo channels, etc.
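
As a small illustration of one such edit, adjusting the stereo balance of an interleaved stereo buffer might look like the hypothetical C++ sketch below (the function name, gain law and sample layout are assumptions, not any editor's actual implementation):

#include <cstddef>
#include <iostream>
#include <vector>

// Adjust the stereo balance of an interleaved buffer (samples stored as
// L, R, L, R, ...). A balance of -1 keeps only the left channel,
// 0 leaves the signal unchanged, and +1 keeps only the right channel.
void applyBalance(std::vector<float>& interleaved, float balance) {
    const float leftGain  = balance > 0.0f ? 1.0f - balance : 1.0f;
    const float rightGain = balance < 0.0f ? 1.0f + balance : 1.0f;
    for (std::size_t i = 0; i + 1 < interleaved.size(); i += 2) {
        interleaved[i]     *= leftGain;   // left sample
        interleaved[i + 1] *= rightGain;  // right sample
    }
}

int main() {
    std::vector<float> stereo = {1.0f, 1.0f, 0.5f, 0.5f};  // two stereo frames
    applyBalance(stereo, 0.5f);                            // shift the image toward the right channel
    for (float s : stereo) std::cout << s << ' ';          // prints 0.5 1 0.25 0.5
    std::cout << '\n';
}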

DirectSound is a deprecated software component of the Microsoft DirectX library for the Windows operating system. DirectSound provides a low-latency interface to sound card drivers written for Windows 95 through Windows XP and can handle the mixing and recording of multiple audio streams.

Digital mixing console

In professional audio, a digital mixing console (DMC) is an electronic device used to combine, route, and change the dynamics, equalization and other properties of multiple audio input signals, using digital computers rather than analog circuitry. The digital audio samples, which are the internal representation of the analog inputs, are summed to what is known as a master channel to produce a combined output. A professional digital mixing console is a dedicated desk or control surface produced exclusively for the task, and is typically more robust in terms of user control, processing power and quality of audio effects. However, a computer with appropriate controller hardware can act as a digital mixing console, since it can mimic the console's interface, inputs and outputs.
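
A minimal C++ sketch of that digital summing is shown below; the channel layout and hard-clipping behaviour are illustrative assumptions, whereas real consoles rely on headroom and limiting rather than hard clipping.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Sum several input channels into a master buffer and clamp the result,
// the core operation of digital mixing.
std::vector<float> mixToMaster(const std::vector<std::vector<float>>& channels) {
    if (channels.empty()) return {};
    std::vector<float> master(channels.front().size(), 0.0f);
    for (const auto& channel : channels) {
        for (std::size_t i = 0; i < master.size() && i < channel.size(); ++i) {
            master[i] += channel[i];
        }
    }
    for (float& sample : master) {
        sample = std::clamp(sample, -1.0f, 1.0f);  // keep the mix in the normalized range
    }
    return master;
}

int main() {
    std::vector<std::vector<float>> channels = {
        {0.4f, 0.4f, 0.4f},
        {0.3f, 0.8f, -0.2f},
    };
    for (float s : mixToMaster(channels)) std::cout << s << ' ';  // approximately 0.7 1 0.2
    std::cout << '\n';
}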

Video overlay is any technique used to display a video window on a computer display while bypassing the chain of CPU to graphics card to computer monitor. This is done in order to speed up the video display, and it is commonly used, for example, by TV tuner cards and early 3D graphics accelerator cards. The term is also used to describe the annotation or inclusion of interactivity on online videos, such as overlay advertising.

Latency refers to a short period of delay between when an audio signal enters and when it emerges from a system. Potential contributors to latency in an audio system include analog-to-digital conversion, buffering, digital signal processing, transmission time, digital-to-analog conversion and the speed of sound in the transmission medium.

Display lag is a phenomenon associated with some types of liquid crystal displays (LCDs), such as those in smartphones and computers, and nearly all types of high-definition televisions (HDTVs). It refers to latency, or lag, measured as the difference between the time a signal is input and the time it takes the input to appear on the screen. This lag time has been measured as high as 68 ms, or the equivalent of 3 to 4 frames on a 60 Hz display. Display lag is not to be confused with pixel response time. Currently the majority of manufacturers do not include any specification or information about display latency for the screens they produce.

This is a glossary of terms relating to computer hardware – physical computer hardware, architectural issues, and peripherals.

SoundGrid is a networking and processing platform for real-time professional audio applications. It is a product of Waves Audio.