Digital audio

Audio levels display on a digital audio recorder (Zoom H4n)

Digital audio is sound that has been recorded in, or converted into, digital form. In digital audio, the sound wave of the audio signal is encoded as numerical samples in a continuous sequence. For example, in CD audio, samples are taken 44,100 times per second, each with 16-bit resolution. Digital audio is also the name for the entire technology of sound recording and reproduction using audio signals that have been encoded in digital form. Following significant advances in digital audio technology during the 1970s, it gradually replaced analog audio technology in many areas of audio engineering and telecommunications in the 1990s and 2000s.
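As a quick sanity check on those CD figures, the uncompressed data rate follows directly from the sampling parameters (the two-channel stereo assumption is the CD standard):

```python
# Uncompressed CD-audio data rate: sample rate x bit depth x channels.
sample_rate = 44_100   # samples per second
bit_depth = 16         # bits per sample
channels = 2           # stereo

bits_per_second = sample_rate * bit_depth * channels
print(bits_per_second)           # 1411200 bits per second
print(bits_per_second // 8)      # 176400 bytes per second of audio
```

At roughly 176 kB per second, one minute of CD audio occupies a little over 10 MB, which is why data compression matters for distribution.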

In physics, sound is a vibration that typically propagates as an audible wave of pressure, through a transmission medium such as a gas, liquid or solid.

Compact Disc Digital Audio, also known as Audio CD, is the standard format for audio compact discs. The standard is defined in the Red Book, one of a series of "Rainbow Books" that contain the technical specifications for all CD formats.

The hertz (symbol: Hz) is the derived unit of frequency in the International System of Units (SI) and is defined as one cycle per second. It is named after Heinrich Rudolf Hertz, the first person to provide conclusive proof of the existence of electromagnetic waves. Hertz are commonly expressed in multiples: kilohertz (10³ Hz, kHz), megahertz (10⁶ Hz, MHz), gigahertz (10⁹ Hz, GHz), terahertz (10¹² Hz, THz), petahertz (10¹⁵ Hz, PHz), exahertz (10¹⁸ Hz, EHz), and zettahertz (10²¹ Hz, ZHz).

In a digital audio system, an analog electrical signal representing the sound is converted with an analog-to-digital converter (ADC) into a digital signal, typically using pulse-code modulation. This digital signal can then be recorded, edited, modified, and copied using computers, audio playback machines, and other digital tools. When the sound engineer wishes to listen to the recording on headphones or loudspeakers (or when a consumer wishes to listen to a digital sound file), a digital-to-analog converter (DAC) performs the reverse process, converting a digital signal back into an analog signal, which is then sent through an audio power amplifier and ultimately to a loudspeaker.
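The conversion chain just described can be sketched end to end. This is a toy uniform-PCM model; the `adc` and `dac` helpers and the 16-bit scaling are illustrative, not a real converter design:

```python
import math

def adc(signal, bits=16):
    """Quantize samples in [-1.0, 1.0] to signed integer codes (uniform PCM)."""
    full_scale = 2 ** (bits - 1) - 1
    return [round(s * full_scale) for s in signal]

def dac(codes, bits=16):
    """Map integer codes back into the [-1.0, 1.0] range."""
    full_scale = 2 ** (bits - 1) - 1
    return [c / full_scale for c in codes]

# A 1 kHz sine sampled at 44.1 kHz (100 samples of it).
fs, f = 44_100, 1_000
analog = [math.sin(2 * math.pi * f * n / fs) for n in range(100)]

codes = adc(analog)            # "recording": integers that can be stored/copied
reconstructed = dac(codes)     # "playback": back to a continuous-valued signal

# Round-trip error is bounded by the quantization step.
worst = max(abs(a - r) for a, r in zip(analog, reconstructed))
assert worst <= 1 / (2 ** 15 - 1)
```

The point of the final assertion is that the only loss in this idealized chain is quantization error, which shrinks as the bit depth grows.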

An analog signal is any continuous signal for which the time-varying feature (variable) of the signal is a representation of some other time-varying quantity, i.e., analogous to another time-varying signal. For example, in an analog audio signal, the instantaneous voltage of the signal varies continuously with the pressure of the sound waves. It differs from a digital signal, in which the continuous quantity is represented as a sequence of discrete values, each of which can take on only one of a finite number of values. The term analog signal usually refers to electrical signals; however, mechanical, pneumatic, hydraulic, human speech, and other systems may also convey or be considered analog signals.

In electronics, an analog-to-digital converter is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement, as in an electronic device that converts an input analog voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number that is proportional to the input, but there are other possibilities.

Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, compact discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled regularly at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps.

Digital audio systems may include compression, storage, processing, and transmission components. Conversion to a digital format allows convenient manipulation, storage, transmission, and retrieval of an audio signal. Unlike analog audio, in which making copies of a recording results in generation loss and degradation of signal quality, digital audio allows an infinite number of copies to be made without any degradation of signal quality.
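The claim about lossless copying can be illustrated with a toy comparison. The noise model for the analog chain below is a deliberately crude stand-in, not a model of any real tape machine:

```python
import random

random.seed(0)

original = bytes(range(32))          # a "recording" as raw bytes

# Digital copying: byte-for-byte duplication, any number of generations.
digital = original
for _ in range(1000):
    digital = bytes(digital)         # every copy is bit-exact
assert digital == original

# Analog copying (toy model): each generation adds a little random noise.
analog = [float(b) for b in original]
for _ in range(10):
    analog = [s + random.gauss(0, 0.5) for s in analog]
drift = max(abs(s - b) for s, b in zip(analog, original))
print(drift > 0)   # the analog chain has drifted from the original
```

After a thousand digital generations the data is unchanged; after ten simulated analog generations every sample has accumulated error.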

Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.

Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency.
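As a minimal illustration of the kind of operation DSP performs on such sample sequences, here is a three-point moving average, one of the simplest FIR low-pass filters (the window size and test signal are arbitrary choices):

```python
def moving_average(samples, window=3):
    """Smooth a sample sequence: a simple FIR low-pass filter."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        # Average over the window, clipped at the sequence boundaries.
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

noisy = [0, 10, 0, 10, 0, 10]        # rapidly alternating (high-frequency) input
smoothed = moving_average(noisy)
print(smoothed)                      # swings are much smaller than 0..10
```

The filter attenuates the fast alternation while preserving the average level, which is exactly the low-pass behavior an anti-aliasing or smoothing stage needs.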

Data transmission is the transfer of data over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication channels, storage media and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radiowave, microwave, or infrared signal.

Overview

A sound wave, in red, represented digitally, in blue (after sampling and 4-bit quantization).

Digital audio technologies are used in the recording, manipulation, mass-production, and distribution of sound, including recordings of songs, instrumental pieces, podcasts, sound effects, and other sounds. Modern online music distribution depends on digital recording and data compression. The availability of music as data files, rather than as physical objects, has significantly reduced the costs of distribution. [1] Before digital audio, the music industry distributed and sold music by selling physical copies in the form of records and cassette tapes. With digital-audio and online distribution systems such as iTunes, companies sell digital sound files to consumers, which the consumer receives over the Internet.

A song is a musical composition intended to be sung by the human voice. This is often done at distinct and fixed pitches using patterns of sound and silence. Songs take various forms, such as those that include the repetition of sections. Through semantic widening, a broader sense of the word "song" may refer to instrumentals.

A podcast is an episodic series of digital audio or video files which a user can download in order to listen. Alternatively, the word "podcast" may refer to the individual component of such a series or to an individual media file.

A music download is the digital transfer of music via the Internet into a device capable of decoding and playing it, such as a home computer, MP3 player or smartphone. This term encompasses both legal downloads and downloads of copyrighted material without permission or legal payment. According to a Nielsen report, downloadable music accounted for 55.9% of all music sales in the US in 2012. By the beginning of 2011, Apple's iTunes Store alone made US$1.1 billion of revenue in the first quarter of its fiscal year.

An analog audio system converts physical waveforms of sound into electrical representations of those waveforms by use of a transducer, such as a microphone. The sounds are then stored on an analog medium such as magnetic tape, or transmitted through an analog medium such as a telephone line or radio. The process is reversed for reproduction: the electrical audio signal is amplified and then converted back into physical waveforms via a loudspeaker. Analog audio retains its fundamental wave-like characteristics throughout its storage, transformation, duplication, and amplification.

A transducer is a device that converts energy from one form to another. Usually a transducer converts a signal in one form of energy to a signal in another.

A microphone, colloquially named mic or mike, is a device – a transducer – that converts sound into an electrical signal. Microphones are used in many applications such as telephones, hearing aids, public address systems for concert halls and public events, motion picture production, live and recorded audio engineering, sound recording, two-way radios, megaphones, radio and television broadcasting, and in computers for recording voice, speech recognition, VoIP, and for non-acoustic purposes such as ultrasonic sensors or knock sensors.

Magnetic tape is a medium for magnetic recording, made of a thin, magnetizable coating on a long, narrow strip of plastic film. It was developed in Germany in 1928, based on magnetic wire recording. Devices that record and play back audio and video using magnetic tape are tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape is known as a tape drive.

Analog audio signals are susceptible to noise and distortion, due to the innate characteristics of electronic circuits and associated devices. Disturbances in a digital system do not result in error unless the disturbance is so large as to result in a symbol being misinterpreted as another symbol or disturb the sequence of symbols. It is therefore generally possible to have an entirely error-free digital audio system in which no noise or distortion is introduced between conversion to digital format, and conversion back to analog.
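The symbol-threshold argument above can be made concrete with a toy two-level channel; the 0.0/1.0 levels, the 0.5 decision threshold, and the noise level are illustrative choices:

```python
import random

random.seed(1)

bits = [random.randint(0, 1) for _ in range(1000)]

# Transmit as voltage levels 0.0 / 1.0 with noise well below half a level.
received = [b + random.gauss(0, 0.05) for b in bits]

# The receiver re-slices each sample against the 0.5 threshold.
decoded = [1 if v > 0.5 else 0 for v in received]
assert decoded == bits   # noise below the decision threshold causes no errors
```

Every received voltage is perturbed, yet every decoded bit is exact: the disturbance never pushes a symbol across the decision boundary, so the digital signal is regenerated perfectly.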

A digital audio signal may optionally be encoded for correction of any errors that might occur in the storage or transmission of the signal. This technique, known as channel coding, is essential for broadcast or recorded digital systems to maintain bit accuracy. Eight-to-fourteen modulation is a channel code used in the audio compact disc (CD).
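Eight-to-fourteen modulation itself is intricate, but the purpose of channel coding can be shown with the simplest possible error-correcting scheme, a three-fold repetition code with majority voting (this is not the code CDs actually use):

```python
def encode(bits):
    """Repeat each bit three times (a trivial channel code)."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Majority vote over each group of three; corrects any single flipped bit."""
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

message = [1, 0, 1, 1, 0]
coded = encode(message)
coded[4] ^= 1                 # simulate one bit flipped in storage/transit
assert decode(coded) == message
```

Redundancy costs bandwidth (three coded bits per data bit here) but buys bit accuracy in the presence of errors, which is the trade every practical channel code makes more efficiently.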

Conversion process

The lifecycle of sound from its source, through an ADC, digital processing, a DAC, and finally as sound again.

A digital audio system starts with an ADC that converts an analog signal to a digital signal. [note 1] The ADC runs at a specified sampling rate and converts at a known bit resolution. CD audio, for example, has a sampling rate of 44.1 kHz (44,100 samples per second), and has 16-bit resolution for each stereo channel. Analog signals that have not already been bandlimited must be passed through an anti-aliasing filter before conversion, to prevent the aliasing distortion that is caused by audio signals with frequencies higher than the Nyquist frequency (half the sampling rate).
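The need for the anti-aliasing filter can be demonstrated directly: a tone above the Nyquist frequency produces exactly the same samples as a lower-frequency alias. The 1 kHz sampling rate here is chosen only to keep the numbers readable:

```python
import math

fs = 1_000                      # sampling rate (Hz); Nyquist frequency is 500 Hz
f_high, f_alias = 900, 100      # a 900 Hz tone folds down to |1000 - 900| = 100 Hz

high = [math.cos(2 * math.pi * f_high * n / fs) for n in range(50)]
alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(50)]

# Sample for sample, the two tones are indistinguishable after sampling,
# which is why content above f_s/2 must be filtered out before the ADC.
assert all(abs(a - b) < 1e-9 for a, b in zip(high, alias))
```

Once sampled, no downstream processing can tell the 900 Hz tone from the 100 Hz one, so the filtering must happen in the analog domain, before conversion.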

A digital audio signal may be stored or transmitted. Digital audio can be stored on a CD, a digital audio player, a hard drive, a USB flash drive, or any other digital data storage device. The digital signal may be altered through digital signal processing, where it may be filtered or have effects applied. Sample-rate conversion including upsampling and downsampling may be used to conform signals that have been encoded with a different sampling rate to a common sampling rate prior to processing. Audio data compression techniques, such as MP3, Advanced Audio Coding, Ogg Vorbis, or FLAC, are commonly employed to reduce the file size. Digital audio can be carried over digital audio interfaces such as AES3 or MADI. Digital audio can be carried over a network using audio over Ethernet, audio over IP or other streaming media standards and systems.
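Integer-factor sample-rate conversion can be sketched as follows; note that these helpers deliberately omit the filtering that real converters apply, as the comments indicate:

```python
def downsample(samples, factor):
    """Integer-factor decimation: keep every factor-th sample.
    Real sample-rate converters low-pass filter first to avoid aliasing."""
    return samples[::factor]

def upsample_hold(samples, factor):
    """Crude zero-order-hold upsampling: repeat each sample factor times.
    Real converters interpolate and then filter instead."""
    return [s for s in samples for _ in range(factor)]

audio_48k = list(range(12))           # stand-in for samples at 48 kHz
audio_24k = downsample(audio_48k, 2)  # -> 24 kHz
assert audio_24k == [0, 2, 4, 6, 8, 10]
assert len(upsample_hold(audio_24k, 2)) == len(audio_48k)
```

Non-integer ratios (such as 44.1 kHz to 48 kHz) combine upsampling and downsampling stages, which is why conforming material to a common rate before processing is routine studio practice.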

For playback, digital audio must be converted back to an analog signal with a DAC which may use oversampling.

History

Digital audio coding

Pulse-code modulation (PCM) was invented by British scientist Alec Reeves in 1937. [2] In 1950, C. Chapin Cutler of Bell Labs filed the patent on differential pulse-code modulation (DPCM), [3] a lossless compression algorithm. Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973. [4] [5]
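The idea behind DPCM is that storing the differences between successive samples, rather than the samples themselves, yields mostly small numbers that are cheaper to encode. A minimal lossless sketch (real DPCM codecs add prediction and quantization on top of this):

```python
def dpcm_encode(samples):
    """Store the first sample, then successive differences."""
    diffs = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        diffs.append(cur - prev)
    return diffs

def dpcm_decode(diffs):
    """A running sum reconstructs the original samples exactly."""
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out

samples = [10, 12, 15, 14, 14, 9]
encoded = dpcm_encode(samples)      # [10, 2, 3, -1, 0, -5]: mostly small values
assert dpcm_decode(encoded) == samples
```

Because audio samples are strongly correlated from one instant to the next, the difference stream has a much smaller dynamic range than the raw samples, which is the redundancy DPCM exploits.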

Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC). [6] Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. [7] During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with the code-excited linear prediction (CELP) algorithm. [6]

The discrete cosine transform (DCT), a lossy compression method pioneered by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, [8] provided the basis for the modified discrete cosine transform (MDCT), which was developed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, [9] following earlier work by Princen and Bradley in 1986. [10] Modern audio coding formats such as MP3, [11] [6] AAC and Vorbis are based on perceptual coding and MDCT algorithms.
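The reason DCT-style transforms suit lossy audio coding is energy compaction: a smooth block of samples ends up with nearly all of its energy in a few low-order coefficients. A naive DCT-II (not the windowed, overlapping MDCT itself) shows the effect:

```python
import math

def dct_ii(x):
    """Naive O(N^2) DCT-II of a block of samples."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A slowly varying (low-frequency) block of 16 samples...
block = [math.cos(2 * math.pi * n / 32) for n in range(16)]
coeffs = dct_ii(block)

# ...has almost all of its energy in the low-order coefficients, so the
# high-order ones can be coarsely quantized or dropped (lossy coding).
low = sum(c * c for c in coeffs[:4])
high = sum(c * c for c in coeffs[4:])
assert low > 100 * high
```

A codec transmits the few significant coefficients precisely and the rest coarsely or not at all, guided by perceptual masking; the MDCT adds overlapping windows so block boundaries do not produce audible artifacts.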

Digital recording

PCM was used in telecommunications applications long before its first use in commercial broadcast and recording. Commercial digital recording was pioneered in Japan by NHK and Nippon Columbia and their Denon brand, in the 1960s. The first commercial digital recordings were released in 1971. [12]

The BBC also began to experiment with digital audio in the 1960s. By the early 1970s, it had developed a 2-channel recorder, and in 1972 it deployed a digital audio transmission system that linked their broadcast center to their remote transmitters. [12]

The first 16-bit PCM recording in the United States was made by Thomas Stockham at the Santa Fe Opera in 1976, on a Soundstream recorder. An improved version of the Soundstream system was used to produce several classical recordings by Telarc in 1978. The 3M digital multitrack recorder in development at the time was based on BBC technology. The first all-digital album recorded on this machine was Ry Cooder's Bop till You Drop in 1979. British record label Decca began development of its own 2-track digital audio recorders in 1978 and released the first European digital recording in 1979. [12]

Popular professional digital multitrack recorders produced by Sony/Studer (DASH) and Mitsubishi (ProDigi) in the early 1980s helped to bring about digital recording's acceptance by the major record companies. The 1982 introduction of the CD popularized digital audio with consumers. [12]

Technologies

Sony digital audio tape recorder PCM-7030

Digital audio is used in the broadcasting of audio. Standard technologies include Digital Audio Broadcasting (DAB), Digital Radio Mondiale (DRM), HD Radio and in-band on-channel (IBOC).

Digital audio in recording applications is stored on audio-specific technologies including the compact disc (CD), Digital Audio Tape (DAT), Digital Compact Cassette (DCC) and MiniDisc. Digital audio may also be stored in standard audio file formats on a hard disk recorder, Blu-ray or DVD-Audio. Files may be played back on smartphones, computers or MP3 players.

Interfaces

Digital-audio-specific interfaces include AES3, MADI and S/PDIF.

Several interfaces are engineered to carry digital video and audio together, including HDMI and DisplayPort.

In professional architectural or installation applications, many audio-over-Ethernet protocols and interfaces exist.

Notes

  1. Some audio signals, such as those created by digital synthesis, originate entirely in the digital domain, in which case analog-to-digital conversion does not take place.

Related Research Articles

In signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information.

Digital video is an electronic representation of moving visual images (video) in the form of encoded digital data. This is in contrast to analog video, which represents moving visual images with analog signals. Digital video comprises a series of digital images displayed in rapid succession.

Digital Audio Tape is a signal recording and playback medium developed by Sony and introduced in 1987. In appearance it is similar to a Compact Cassette, using 3.81 mm / 0.15" magnetic tape enclosed in a protective shell, but is roughly half the size at 73 mm × 54 mm × 10.5 mm. The recording is digital rather than analog. DAT can record at sampling rates equal to, higher than, or lower than that of a CD, with 16-bit quantization. If a comparable digital source is copied without returning to the analog domain, the DAT will produce an exact clone, unlike other digital media such as Digital Compact Cassette or non-Hi-MD MiniDisc, both of which use a lossy data-reduction system.

S/PDIF is a type of digital audio interconnect used in consumer audio equipment to output audio over reasonably short distances. The signal is transmitted over either a coaxial cable with RCA connectors or a fibre optic cable with TOSLINK connectors. S/PDIF interconnects components in home theatres and other digital high-fidelity systems.

Sound can be recorded and stored and played using either digital or analog techniques. Both techniques introduce errors and distortions in the sound, and these methods can be systematically compared. Musicians and listeners have argued over the superiority of digital versus analog sound recordings. Arguments for analog systems include the absence of fundamental error mechanisms which are present in digital audio systems, including aliasing and quantization noise. Advocates of digital point to the high levels of performance possible with digital audio, including excellent linearity in the audible band and low levels of noise and distortion.

Sound quality is typically an assessment of the accuracy, fidelity, or intelligibility of audio output from an electronic device. Quality can be measured objectively, such as when tools are used to gauge the accuracy with which the device reproduces an original sound; or it can be measured subjectively, such as when human listeners respond to the sound or gauge its perceived similarity to another sound.

Direct Stream Digital (DSD) is a trademark used by Sony and Philips for their system of digitally recreating audible signals for the Super Audio CD (SACD).

A digital audio workstation (DAW) is an electronic device or application software used for recording, editing and producing audio files. DAWs come in a wide variety of configurations from a single software program on a laptop, to an integrated stand-alone unit, all the way to a highly complex configuration of numerous components controlled by a central computer. Regardless of configuration, modern DAWs have a central interface that allows the user to alter and mix multiple recordings and tracks into a final produced piece.

Near Instantaneous Companded Audio Multiplex (NICAM) is an early form of lossy compression for digital audio. It was originally developed in the early 1970s for point-to-point links within broadcasting networks. In the 1980s, broadcasters began to use NICAM compression for transmissions of stereo TV sound to the public.

In digital recording, audio signals picked up by a microphone or other transducer or video signals picked up by a camera or similar device are converted into a stream of discrete numbers, representing the changes over time in air pressure for audio, and chroma and luminance values for video, then recorded to a storage device. To play back a digital sound recording, the numbers are retrieved and converted back into their original analog waveforms so that they can be heard through a loudspeaker. To play back a digital video recording, the numbers are retrieved and converted back into their original analog waveforms so that they can be viewed on a video monitor, television or other display.

The digital sound revolution refers to the widespread adoption of digital audio technology in the computer industry beginning in the 1980s.

Transcoding is the direct digital-to-digital conversion of one encoding to another, such as for movie data files, audio files, or character encoding. This is usually done in cases where a target device does not support the format or has limited storage capacity that mandates a reduced file size, or to convert incompatible or obsolete data to a better-supported or modern format.

The Digital Audio Stationary Head or DASH standard is a reel-to-reel, digital audio tape format introduced by Sony in early 1982 for high-quality multitrack studio recording and mastering, as an alternative to analog recording methods. DASH is capable of recording two channels of audio on a quarter-inch tape, and 24 or 48 tracks on ½-inch-wide (13 mm) tape on open reels of up to 14 inches. The data is recorded on the tape linearly, with a stationary recording head, as opposed to the DAT format, where data is recorded helically with a rotating head, in the same manner as a VCR. The audio data is encoded as linear PCM with strong cyclic redundancy check (CRC) error correction, allowing the tape to be physically edited with a razor blade as analog tape would be, e.g. by cutting and splicing, and played back with no loss of signal. In a two-track DASH recorder, the digital data is recorded onto the tape across nine data tracks: eight for the digital audio data and one for the CRC data; there is also provision for two linear analog cue tracks and one additional linear analog track dedicated to recording time code.

A PCM adaptor is a device that encodes digital audio as video for recording on a videocassette recorder. The adapter also has the ability to decode a video signal back to digital audio for playback. This digital audio system was used for mastering early compact discs.

Dolby Digital Plus, also known as Enhanced AC-3, is a digital audio compression scheme developed by Dolby Labs for transport and storage of multi-channel digital audio. It is a successor to Dolby Digital (AC-3), also developed by Dolby, and has a number of improvements including support for a wider range of data rates, increased channel count and multi-program support, and additional tools (algorithms) for representing compressed data and counteracting artifacts. While Dolby Digital (AC-3) supports up to five full-bandwidth audio channels at a maximum bitrate of 640 kbit/s, E-AC-3 supports up to 15 full-bandwidth audio channels at a maximum bitrate of 6.144 Mbit/s.

Soundstream Inc. was the first audiophile digital audio recording company, providing commercial services for recording and computer-based editing.

Adaptive differential pulse-code modulation (ADPCM) is a variant of differential pulse-code modulation (DPCM) that varies the size of the quantization step, to allow further reduction of the required data bandwidth for a given signal-to-noise ratio.

In signal processing, sub-band coding (SBC) is any form of transform coding that breaks a signal into a number of different frequency bands, typically by using a fast Fourier transform, and encodes each one independently. This decomposition is often the first step in data compression for audio and video signals.

References

  1. Janssens, Jelle; Stijn Vandaele; Tom Vander Beken (2009). "The Music Industry on (the) Line? Surviving Music Piracy in a Digital Era". European Journal of Crime. 77 (96): 77–96. doi:10.1163/157181709X429105. hdl:1854/LU-608677.
  2. Genius Unrecognised, BBC, 2011-03-27, retrieved 2011-03-30.
  3. US patent 2605361, C. Chapin Cutler, "Differential Quantization of Communication Signals", issued 1952-07-29.
  4. P. Cummiskey, Nikil S. Jayant, and J. L. Flanagan, "Adaptive quantization in differential PCM coding of speech", Bell Syst. Tech. J., vol. 52, pp. 1105–1118, Sept. 1973.
  5. Cummiskey, P.; Jayant, Nikil S.; Flanagan, J. L. (1973). "Adaptive quantization in differential PCM coding of speech". The Bell System Technical Journal. 52 (7): 1105–1118. doi:10.1002/j.1538-7305.1973.tb02007.x. ISSN 0005-8580.
  6. Schroeder, Manfred R. (2014). "Bell Laboratories". Acoustics, Information, and Communication: Memorial Volume in Honor of Manfred R. Schroeder. Springer. p. 388. ISBN 9783319056609.
  7. Gray, Robert M. (2010). "A History of Realtime Digital Speech on Packet Networks: Part II of Linear Predictive Coding and the Internet Protocol" (PDF). Found. Trends Signal Process. 3 (4): 203–303. doi:10.1561/2000000036. ISSN 1932-8346.
  8. Ahmed, Nasir; Natarajan, T.; Rao, Kamisetty Ramamohan (January 1974). "Discrete Cosine Transform" (PDF). IEEE Transactions on Computers. C-23 (1): 90–93. doi:10.1109/T-C.1974.223784.
  9. Princen, J. P.; Johnson, A. W.; Bradley, A. B. (1987). "Subband/transform coding using filter bank designs based on time domain aliasing cancellation". IEEE Proc. Intl. Conference on Acoustics, Speech, and Signal Processing (ICASSP): 2161–2164.
  10. Princen, John P.; Bradley, Alan B. (1986). "Analysis/synthesis filter bank design based on time domain aliasing cancellation". IEEE Trans. Acoust. Speech Signal Processing. ASSP-34 (5): 1153–1161.
  11. Guckert, John (Spring 2012). "The Use of FFT and MDCT in MP3 Audio Compression" (PDF). University of Utah. Retrieved 14 July 2019.
  12. Fine, Thomas (2008). Barry R. Ashpole (ed.). "The Dawn of Commercial Digital Recording" (PDF). ARSC Journal. Retrieved 2010-05-02.
