# Companding


In telecommunication and signal processing, companding (occasionally called compansion) is a method of mitigating the detrimental effects of a channel with limited dynamic range. The name is a portmanteau of the words compressing and expanding, which are the functions of a compander at the transmitting and receiving end respectively. The use of companding allows signals with a large dynamic range to be transmitted over facilities that have a smaller dynamic range capability. Companding is employed in telephony and other audio applications such as professional wireless microphones and analog recording.

## How it works

The dynamic range of a signal is compressed before transmission and is expanded to the original value at the receiver. The electronic circuit that does this is called a compander and works by compressing or expanding the dynamic range of an analog electronic signal such as sound recorded by a microphone. One variety is a triplet of amplifiers: a logarithmic amplifier, followed by a variable-gain linear amplifier and an exponential amplifier. Such a triplet has the property that its output voltage is proportional to the input voltage raised to an adjustable power.
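The amplifier triplet described above can be modeled numerically. The following is a minimal Python sketch (purely illustrative, not a circuit simulation): the variable-gain stage multiplies the log-domain signal by an adjustable factor, so the chain as a whole raises the input magnitude to that power.

```python
import math

def compander_triplet(x: float, power: float) -> float:
    """Model of a log/variable-gain/exp amplifier chain.

    Multiplying by `power` in the log domain makes the overall
    transfer function |x| ** power, with the sign preserved.
    """
    if x == 0.0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    log_stage = math.log(abs(x))        # logarithmic amplifier
    gain_stage = power * log_stage      # variable-gain linear amplifier
    return sign * math.exp(gain_stage)  # exponential amplifier
```

With `power` below 1 the chain compresses (small magnitudes are boosted relative to large ones); with `power` above 1 it expands, which is why the same topology serves both ends of the link.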

Companded quantization is the combination of three functional building blocks – namely, a (continuous-domain) signal dynamic range compressor, a limited-range uniform quantizer, and a (continuous-domain) signal dynamic range expander that inverts the compressor function. This type of quantization is frequently used in telephony systems. [1] [2]
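The three building blocks can be sketched in Python, here using the continuous μ-law curve as the compressor (function names are ours; the quantizer level count is chosen for illustration):

```python
import math

MU = 255.0  # μ-law parameter used in North American telephony

def compress(x: float) -> float:
    """Continuous-domain dynamic range compressor (mu-law), |x| <= 1."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y: float) -> float:
    """Continuous-domain expander: the exact inverse of compress()."""
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1.0) / MU, y)

def uniform_quantize(y: float, levels: int = 256) -> float:
    """Limited-range uniform quantizer on [-1, 1]."""
    step = 2.0 / (levels - 1)
    return round(y / step) * step

def companded_quantize(x: float) -> float:
    """Compressor -> uniform quantizer -> expander."""
    return expand(uniform_quantize(compress(x)))
```

Because the compressor spreads small amplitudes across many quantizer levels, quiet signals come back with a much smaller relative error than they would from the uniform quantizer alone.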

In practice, companders are designed to operate according to relatively simple dynamic range compressor functions that are designed to be suitable for implementation using simple analog electronic circuits. The two most popular compander functions used for telecommunications are the A-law and μ-law functions.
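The continuous forms of the two compressor functions are easy to state directly (the deployed G.711 codecs actually use piecewise linear segment approximations of these curves, which the sketch below does not attempt):

```python
import math

def mu_law_compress(x: float, mu: float = 255.0) -> float:
    """Continuous mu-law compressor, -1 <= x <= 1."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def a_law_compress(x: float, A: float = 87.6) -> float:
    """Continuous A-law compressor, -1 <= x <= 1.

    Linear below 1/A, logarithmic above it.
    """
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / (1.0 + math.log(A))
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))
    return math.copysign(y, x)
```

Both curves map [-1, 1] onto [-1, 1] and boost small amplitudes: an input of 0.5 comes out near 0.87 under either law.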

## Applications

Companding is used in digital telephony systems, compressing before input to an analog-to-digital converter, and then expanding after a digital-to-analog converter. This is equivalent to using a non-linear ADC as in a T-carrier telephone system that implements A-law or μ-law companding. This method is also used in digital file formats for better signal-to-noise ratio (SNR) at lower bit depths. For example, a linearly encoded 16-bit PCM signal can be converted to an 8-bit WAV or AU file while maintaining a decent SNR by compressing before the transition to 8-bit and expanding after conversion back to 16-bit. This is effectively a form of lossy audio data compression.
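The SNR benefit for quiet signals can be demonstrated with a small experiment: quantize a low-level tone to 8-bit codes once linearly and once through a μ-law compressor, then compare the error. This is a simplified sketch (float samples and a plain rounding quantizer, not the bit-exact G.711 or WAV/AU encodings):

```python
import math

MU = 255.0  # mu-law parameter

def mu_compress(x: float) -> float:
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y: float) -> float:
    return math.copysign(((1.0 + MU) ** abs(y) - 1.0) / MU, y)

def to_8bit(x: float) -> int:
    """Round a [-1, 1] float to a signed 8-bit-style code."""
    return max(-127, min(127, round(x * 127.0)))

def snr_db(signal, error) -> float:
    return 10.0 * math.log10(sum(s * s for s in signal) /
                             sum(e * e for e in error))

# A quiet 440 Hz tone, where plain 8-bit quantization struggles most
samples = [0.05 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]

linear8 = [to_8bit(x) / 127.0 for x in samples]
companded8 = [mu_expand(to_8bit(mu_compress(x)) / 127.0) for x in samples]

snr_linear = snr_db(samples, [a - b for a, b in zip(samples, linear8)])
snr_companded = snr_db(samples, [a - b for a, b in zip(samples, companded8)])
```

For this low-amplitude signal the companded path comes out several dB ahead, because the compressor spends most of the 8-bit code space on the small amplitudes the signal actually occupies.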

Professional wireless microphones use companding because the dynamic range of the microphone audio signal itself is larger than the dynamic range that radio transmission can provide. Companding also reduces the noise and crosstalk levels at the receiver. [3]

Companders are used in concert audio systems and in some noise reduction schemes.

## History

The use of companding in an analog picture transmission system was patented by A. B. Clark of AT&T in 1928 (filed in 1925): [4]

> In the transmission of pictures by electric currents, the method which consists in sending currents varied in a non-linear relation to the light values of the successive elements of the picture to be transmitted, and at the receiving end exposing corresponding elements of a sensitive surface to light varied in inverse non-linear relation to the received current.

A. B. Clark patent

In 1942, Clark and his team completed the SIGSALY secure voice transmission system that included the first use of companding in a PCM (digital) system. [5]

In 1953, B. Smith showed that a nonlinear DAC could be complemented by the inverse nonlinearity in a successive-approximation ADC configuration, simplifying the design of digital companding systems. [6]

In 1970, H. Kaneko developed the uniform description of segment (piecewise linear) companding laws that had by then been adopted in digital telephony. [7]

In the 1980s and 1990s, music equipment manufacturers such as Roland, Yamaha, and Korg used companding to compress the library waveform data in their digital synthesizers. The exact algorithms are not known, nor is it known whether any of these manufacturers used the companding scheme described in this article. What is known is that they did use data compression [8] during that period, and that some people refer to it as "companding" when it may in fact mean something else, such as generic data compression and expansion. [9] The practice dates back to the late 1980s, when memory chips were often among the most costly components in an instrument. Manufacturers usually quoted the amount of memory in its compressed form: the 24 MB of physical waveform ROM in a Korg Trinity, for example, holds 48 MB of data when uncompressed. Similarly, Roland SR-JV expansion boards were usually advertised as 8 MB boards with "16 MB-equivalent content". Careless copying of this specification, omitting the "equivalent" qualifier, has often caused confusion.

## Related Research Articles

In signal processing, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.

Modulation is used by singers and other vocalists to modify characteristics of their voices, such as loudness or pitch.

In electronics, an analog-to-digital converter is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement such as an electronic device that converts an input analog voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number that is proportional to the input, but there are other possibilities.

An A-law algorithm is a standard companding algorithm, used in European 8-bit PCM digital communications systems to optimize, i.e. modify, the dynamic range of an analog signal for digitizing. It is one of two versions of the G.711 standard from ITU-T, the other version being the similar μ-law, used in North America and Japan.

Delta modulation is an analog-to-digital and digital-to-analog signal conversion technique used for transmission of voice information where quality is not of primary importance. DM is the simplest form of differential pulse-code modulation (DPCM), in which the difference between successive samples is encoded into n-bit data streams. In delta modulation, the transmitted data are reduced to a 1-bit data stream.
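A 1-bit delta modulator can be sketched in a few lines of Python. This is a fixed-step illustration (function names and the step size are ours): the encoder transmits only whether the signal rose or fell relative to its running approximation, and the decoder rebuilds that approximation from the bits.

```python
def dm_encode(samples, step: float = 0.02):
    """1-bit delta modulation: emit 1 if the input is above the
    running approximation, else 0, and step the approximation."""
    bits, approx = [], 0.0
    for x in samples:
        bit = 1 if x > approx else 0
        approx += step if bit else -step
        bits.append(bit)
    return bits

def dm_decode(bits, step: float = 0.02):
    """Integrate the bit stream back into a staircase approximation."""
    out, approx = [], 0.0
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out
```

As long as the input changes by less than one step per sample, the staircase tracks it to within about a step; a faster-moving input produces the slope-overload distortion characteristic of DM.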

The μ-law algorithm is a companding algorithm, primarily used in 8-bit PCM digital telecommunication systems in North America and Japan. It is one of two versions of the G.711 standard from ITU-T, the other version being the similar A-law. A-law is used in regions where digital telecommunication signals are carried on E-1 circuits, e.g. Europe.

Telephony is the field of technology involving the development, application, and deployment of telecommunication services for the purpose of electronic transmission of voice, fax, or data, between distant parties. The history of telephony is intimately linked to the invention and development of the telephone.

Digital audio is a representation of sound recorded in, or converted into, digital form. In digital audio, the sound wave of the audio signal is typically encoded as numerical samples in a continuous sequence. For example, in CD audio, samples are taken 44,100 times per second, each with 16-bit sample depth. Digital audio is also the name for the entire technology of sound recording and reproduction using audio signals that have been encoded in digital form. Following significant advances in digital audio technology during the 1970s and 1980s, it gradually replaced analog audio technology in many areas of audio engineering and telecommunications in the 1990s and 2000s.

In electronics, a digital-to-analog converter is a system that converts a digital signal into an analog signal. An analog-to-digital converter (ADC) performs the reverse function.

G.711 is a narrowband audio codec originally designed for use in telephony that provides toll-quality audio at 64 kbit/s. G.711 passes audio signals in the range of 300–3400 Hz and samples them at the rate of 8,000 samples per second, with the tolerance on that rate of 50 parts per million (ppm). Non-uniform (logarithmic) quantization with 8 bits is used to represent each sample, resulting in a 64 kbit/s bit rate. There are two slightly different versions: μ-law, which is used primarily in North America and Japan, and A-law, which is in use in most other countries outside North America.

Sound can be recorded and stored and played using either digital or analog techniques. Both techniques introduce errors and distortions in the sound, and these methods can be systematically compared. Musicians and listeners have argued over the superiority of digital versus analog sound recordings. Arguments for analog systems include the absence of fundamental error mechanisms which are present in digital audio systems, including aliasing and quantization noise. Advocates of digital point to the high levels of performance possible with digital audio, including excellent linearity in the audible band and low levels of noise and distortion.

Dynamic range compression (DRC) or simply compression is an audio signal processing operation that reduces the volume of loud sounds or amplifies quiet sounds, thus reducing or compressing an audio signal's dynamic range. Compression is commonly used in sound recording and reproduction, broadcasting, live sound reinforcement and in some instrument amplifiers.

Sound quality is typically an assessment of the accuracy, fidelity, or intelligibility of audio output from an electronic device. Quality can be measured objectively, such as when tools are used to gauge the accuracy with which the device reproduces an original sound; or it can be measured subjectively, such as when human listeners respond to the sound or gauge its perceived similarity to another sound.

dbx is a family of noise reduction systems developed by the company of the same name. The most common implementations are dbx Type I and dbx Type II for analog tape recording and, less commonly, vinyl LPs. A separate implementation, known as dbx-TV, is part of the MTS system used to provide stereo sound to North American and certain other TV systems. The company, dbx, Inc., was also involved with Dynamic Noise Reduction (DNR) systems.

Near Instantaneous Companded Audio Multiplex (NICAM) is an early form of lossy compression for digital audio. It was originally developed in the early 1970s for point-to-point links within broadcasting networks. In the 1980s, broadcasters began to use NICAM compression for transmissions of stereo TV sound to the public.

Continuously variable slope delta modulation is a voice coding method. It is a delta modulation with variable step size, first proposed by Greefkes and Riemens in 1970.

In digital audio using pulse-code modulation (PCM), bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample. Examples of bit depth include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio and Blu-ray Disc which can support up to 24 bits per sample.

Adaptive differential pulse-code modulation (ADPCM) is a variant of differential pulse-code modulation (DPCM) that varies the size of the quantization step, to allow further reduction of the required data bandwidth for a given signal-to-noise ratio.
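The step-size adaptation can be illustrated with a toy codec. This sketch is not IMA ADPCM or G.726; it uses a 2-bit magnitude code and an invented grow/shrink rule (multipliers 1.5 and 0.8) purely to show the principle that large difference codes enlarge the step and small ones shrink it.

```python
def adpcm_encode(samples, step0: float = 0.01):
    """Toy ADPCM encoder: code each difference from the prediction
    as (sign, 2-bit magnitude), adapting the step size as we go."""
    codes, pred, step = [], 0.0, step0
    for x in samples:
        diff = x - pred
        mag = min(3, int(abs(diff) / step))   # 2-bit magnitude code
        sign = 0 if diff >= 0 else 1
        codes.append((sign, mag))
        pred += (mag + 0.5) * step * (1 if sign == 0 else -1)
        # adapt: big codes grow the step, small codes shrink it
        step = max(1e-4, min(0.5, step * (1.5 if mag >= 2 else 0.8)))
    return codes

def adpcm_decode(codes, step0: float = 0.01):
    """Mirror the encoder's prediction and adaptation exactly."""
    out, pred, step = [], 0.0, step0
    for sign, mag in codes:
        pred += (mag + 0.5) * step * (1 if sign == 0 else -1)
        out.append(pred)
        step = max(1e-4, min(0.5, step * (1.5 if mag >= 2 else 0.8)))
    return out
```

Because the decoder runs the same adaptation rule in lockstep, no step-size information needs to be transmitted; only the 3-bit codes cross the channel.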

Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, compact discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled regularly at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps.

## References

1. W. R. Bennett, "Spectra of Quantized Signals", Bell System Technical Journal, Vol. 27, pp. 446–472, July 1948.
2. Robert M. Gray and David L. Neuhoff, "Quantization", IEEE Transactions on Information Theory, Vol. IT-44, No. 6, pp. 2325–2383, Oct. 1998. doi:10.1109/18.720541
3. US patent, A. B. Clark, "Electrical picture-transmitting system", issued 1928-11-13, assigned to AT&T.
4. Randall K. Nichols and Panos C. Lekkas (2002). McGraw-Hill Professional. p. 256. ISBN 0-07-138038-8.
5. B. Smith, "Instantaneous Companding of Quantized Signals", Bell System Technical Journal, Vol. 36, May 1957, pp. 653–709.
6. H. Kaneko, "A Unified Formulation of Segment Companding Laws and Synthesis of Codecs and Digital Compandors", Bell System Technical Journal, Vol. 49, September 1970, pp. 1555–1558.
7. Eric Persing, sound designer (Roland, Spectrasonics), 29 May 2010. https://www.gearslutz.com/board/showpost.php?p=5446278&postcount=130
8. Dave Polich, sound designer, 13 January 2018. https://www.gearslutz.com/board/showpost.php?p=13068220&postcount=146