Parallel compression

Parallel compression, also known as New York compression, is a dynamic range compression technique used in sound recording and mixing. A form of upward compression, it is achieved by mixing an unprocessed ('dry') or lightly compressed signal with a heavily compressed version of the same signal. Rather than lowering the highest peaks to reduce dynamic range, it decreases the dynamic range by raising the softest sounds, adding audible detail.[1] It is most often used on stereo percussion buses in recording and mixdown, on electric bass, and on vocals in studio and live concert mixes.[2]

History

The internal circuitry of Dolby A noise reduction, introduced in 1965, contained parallel buses with compression on one of them, the two mixed in a flexible ratio.[2] In October 1977, Studio Sound magazine published an article by Mike Beville describing the technique as applied to classical recordings.[3] Many citations of this article claim that Beville called it "side-chain" compression, most likely because of a misquoted citation of the article in Roey Izhaki's book Mixing Audio: Concepts, Practices and Tools.[2] However, Beville used the term "side-chain" to describe the internal electronics and signal flow of compressors, not a technique for using them. His discussion of parallel compression occurs in a separate section at the end of the article, where he outlines how to place a limiter-compressor "in parallel with the direct signal" to obtain effective compression at low input levels. As Izhaki notes in his book, others have also referred to the technique as "side-chain" compression, which has caused confusion with the distinct side-chain technique that uses an external "key" or "side chain" signal to determine compression applied to a target signal.

Beville's article, entitled "Compressors and Limiters", was reprinted in the same magazine in June 1988.[4] A follow-up article by Richard Hulse in the April 1996 Studio Sound included application tips and a description of implementing the technique in a digital audio workstation.[5] Bob Katz coined the term "parallel compression"[2] and has described it as an implementation of "upward compression": increasing the audibility of softer passages.[4] Studio engineers in New York City became known for their reliance on the technique, which picked up the name "New York compression".[2]

Use

The human ear is sensitive to loud sounds being suddenly reduced in volume, but less so to soft sounds being increased in volume; parallel compression takes advantage of this difference.[2][4] Unlike conventional limiting and downward compression, parallel compression retains the fast transients in music, preserving the "feel" and immediacy of a live performance. Because its action is less audible to the human ear, the compressor can be set aggressively, with high ratios for strong effect.[2]

In an audio mix using an analog mixing console and analog compressors, parallel compression is achieved by sending a monophonic or stereo signal in two or more directions and then summing the multiple pathways, mixing them together by ear to achieve the desired effect. One pathway goes straight to the summing mixer, while the others pass through mono or stereo compressors set aggressively for high-ratio gain reduction. The compressed signals are brought back to the summing mixer and blended with the straight signal.[2]
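The routing described above can be sketched numerically in a few lines of Python. This is an illustration rather than anything from the source: the `compress` function is a simplified, instantaneous compressor with no attack or release behavior, and the threshold, ratio, and blend values are arbitrary assumptions.

```python
import numpy as np

def compress(signal, threshold_db=-40.0, ratio=10.0):
    """Simplified instantaneous compressor: attenuate samples above threshold."""
    level_db = 20.0 * np.log10(np.abs(signal) + 1e-12)
    overshoot_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -overshoot_db * (1.0 - 1.0 / ratio)  # high ratio -> heavy reduction
    return signal * 10.0 ** (gain_db / 20.0)

def parallel_compress(dry, wet_gain=0.5):
    """Blend the straight signal with an aggressively compressed copy."""
    wet = compress(dry)
    return dry + wet_gain * wet  # the summing mixer: blended "by ear"

# Quiet material gains proportionally more level than loud peaks do,
# which is the upward-compression effect described above.
quiet = np.array([0.01])
loud = np.array([0.9])
```

With these settings a sample below the threshold is boosted by the full blend amount, while a loud peak contributes almost nothing through the heavily compressed path, so the peaks are left nearly untouched.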

If digital components are being used, latency must be taken into account. A digital compressor takes slightly longer to process the sound, on the order of 0.3 to 3 milliseconds, so if the analog routing is used unchanged, the signals traveling through the parallel pathways arrive at the summing mixer at slightly different times, creating unpleasant comb-filtering and phasing effects. Instead, both pathways must pass through the same number of processing stages so that they are delayed by the same amount of time: the "straight" pathway is also assigned a compression stage, but one set to do little or no dynamic range compression, while the other is set for high amounts of gain reduction.[6]
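A small Python experiment (an illustration, not from the source) shows why the compensation matters. Here the compressor path is modeled as a pure delay of 24 samples, which is half the period of a 1 kHz tone at 48 kHz, so the uncompensated sum cancels almost completely; delaying the straight path by the same amount restores coherent summing.

```python
import numpy as np

def delay(signal, n):
    """Delay by n samples, modeling a digital processor's latency."""
    return np.concatenate([np.zeros(n), signal])[:len(signal)]

fs = 48_000
t = np.arange(fs // 10) / fs
dry = np.sin(2 * np.pi * 1000.0 * t)   # 1 kHz test tone

latency = 24                            # 0.5 ms: half the tone's 48-sample period
wet = delay(dry, latency)               # the processed path arrives late

misaligned = dry + wet                  # comb filtering: this frequency cancels
aligned = delay(dry, latency) + wet     # equal delay on both paths: coherent sum
```

The half-period delay is chosen to make the cancellation maximal for this one frequency; with real program material and arbitrary latency, the result is the characteristic comb-filter notching across the spectrum rather than total cancellation.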

The method can be used artistically to "fatten" or "beef up" a mix through careful setting of the compressor's attack and release times.[4] These settings may be pushed further until the compressor causes the signal to "pump" or "breathe" in tempo with the song, adding its own character to the sound. Unusually extreme implementations have been achieved by studio mix engineers such as New York-based Michael Brauer, who uses five parallel compressors, adjusted individually for timbral and tonal variations and blended to taste, to achieve his target sound on vocals for the Rolling Stones, Aerosmith, Bob Dylan, KT Tunstall and Coldplay.[7] Mix engineer Anthony "Rollmottle" Puglisi applies parallel compression conservatively across the entire mix, especially in dance-oriented electronic music: "it gives a track that extra oomph and power (not just make it louder—there's a difference) through quieter portions of the jam without resorting to one of those horrific 'maximizer' plugins that squeeze the dynamics right out of your song."[8] While parallel compression is widely used in electronic dance music, "side-chain" compression is the technique popularly used to give a synth lead or other melodic element the pulsating quality ubiquitous in the genre: one or more tracks are side-chained to the kick, compressing them only when the beat occurs.
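To make the distinction with parallel compression concrete, the side-chain ducking just described can be sketched as follows. This is a minimal, hypothetical model: the kick track acts as the key, the target's gain is pulled down on every hit and then recovers during a release phase, and the threshold, depth, and recovery constants are illustrative only.

```python
import numpy as np

def sidechain_duck(target, key, threshold=0.5, depth=0.8, recovery=0.001):
    """Duck `target` whenever the `key` signal (e.g. a kick track) exceeds threshold."""
    out = np.empty(len(target))
    gain = 1.0
    for i in range(len(target)):
        if abs(key[i]) > threshold:
            gain = 1.0 - depth               # pull the gain down on the beat
        else:
            gain += (1.0 - gain) * recovery  # recover gradually (release)
        out[i] = gain * target[i]
    return out

pad = np.full(2000, 0.8)     # a sustained synth pad
kick = np.zeros(2000)
kick[500:520] = 1.0          # one kick hit
ducked = sidechain_duck(pad, kick)
```

Note that the key signal only controls the gain; unlike parallel compression, no copy of the target is blended back in, and the pad's level visibly "pumps" in time with the kick.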

References

  1. Thomas, Nick (February 8, 2009). Guide to Mixing. p. 39.
  2. Izhaki, Roey (2008). Mixing Audio: Concepts, Practices and Tools. Focal Press. p. 322. ISBN 978-0-240-52068-1. Retrieved March 19, 2010.
  3. Beville, Mike (October 1977). "Compressors and limiters: their uses and abuses" (PDF). Studio Sound. 19: 28–32 via AmericanRadioHistory.com.
  4. Katz, Bob; Katz, Robert A. (2002). Mastering Audio: The Art and the Science. Focal Press. pp. 133–138. ISBN 0-240-80545-3.
  5. Hulse, Richard (July 2017) [1996]. "Master Class: A different way of looking at compression". Richard Hulse (reprint). Retrieved July 4, 2017.
  6. Bregitzer, Lorne (2008). Secrets of Recording: Professional Tips, Tools & Techniques. Focal Press. pp. 193–194. ISBN 978-0-240-81127-7.
  7. Senior, Mike (April 2009). "Cubase: Advanced Vocal Compression". Sound on Sound. SOS Publications Group. Retrieved March 19, 2010.
  8. Hirsch, Scott; Heithecker, Steve (2006). Pro Tools 7 Session Secrets: Professional Recipes for High-Octane Results. John Wiley and Sons. p. 155. ISBN 0-471-93398-8.