In signal processing, phase distortion or phase-frequency distortion is distortion, that is, a change in the shape of the waveform, that occurs when (a) a filter's phase response is not linear over the frequency range of interest, so that the phase shift introduced by a circuit or device is not directly proportional to frequency, or (b) the zero-frequency intercept of the phase-frequency characteristic is not 0 or an integral multiple of 2π radians.
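Case (a) can be demonstrated numerically. The following sketch (illustrative values, not from the article) builds a waveform from two harmonics, then applies a non-linear phase response by shifting the phase of the third harmonic only. No amplitude changes, yet the waveform's shape, here summarized by its peak value, does:

```python
import math

# A minimal sketch of pure phase distortion: the same two harmonics,
# once in their original phase relationship and once with an extra
# 90-degree shift on the third harmonic only. Component amplitudes are
# identical, yet the waveform shape (here, its peak value) changes.
N = 1000

def sample(t, phase3):
    # fundamental plus third harmonic at one-third amplitude
    return math.sin(2 * math.pi * t) + (1 / 3) * math.sin(2 * math.pi * 3 * t + phase3)

linear = [sample(n / N, 0.0) for n in range(N)]
shifted = [sample(n / N, math.pi / 2) for n in range(N)]

print(round(max(linear), 3), round(max(shifted), 3))
```

A filter with linear phase would delay both harmonics by the same time, leaving the shape intact; only a frequency-disproportionate shift, as above, distorts it.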
Grossly changed phase relationships, without any change in amplitudes, can be audible, but the audibility of the kinds of phase shifts produced by typical sound systems remains debated.
Amplitude modulation (AM) is a modulation technique used in electronic communication, most commonly for transmitting messages with a radio carrier wave. In amplitude modulation, the amplitude of the carrier wave is varied in proportion to that of the message signal, such as an audio signal. This technique contrasts with angle modulation, in which either the frequency of the carrier wave is varied as in frequency modulation, or its phase, as in phase modulation.
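The relationship described above can be sketched directly: the transmitted signal is the carrier scaled by one plus the modulation index times the message. The sample rate, carrier and message frequencies, and modulation index below are illustrative assumptions, not values from the article.

```python
import math

# A sketch of amplitude modulation: the carrier's amplitude is scaled
# by (1 + m * message), where m is the modulation index. fs, fc, fm and
# m are assumed example values.
fs, fc, fm, m = 8000, 1000, 100, 0.5

am = [(1 + m * math.sin(2 * math.pi * fm * n / fs))
      * math.cos(2 * math.pi * fc * n / fs) for n in range(fs)]

# The envelope swings between (1 - m) and (1 + m) times the carrier peak.
print(round(max(am), 2), round(min(am), 2))
```

With m = 0.5 the envelope traces the message between 0.5 and 1.5 times the unmodulated carrier amplitude, which is what an envelope detector recovers.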
Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.
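As a small illustration of the decibel measurement mentioned above, the sketch below computes the level of a digital sine at half amplitude relative to full scale (dBFS). The signal and reference are assumptions for the example.

```python
import math

# A sketch: expressing a digital audio signal's level in decibels
# relative to full scale (dBFS), for a sine at half amplitude.
n = 10000
signal = [0.5 * math.sin(2 * math.pi * 97 * i / n) for i in range(n)]

rms = math.sqrt(sum(x * x for x in signal) / n)
full_scale_rms = 1 / math.sqrt(2)          # RMS of a full-scale sine
level_db = 20 * math.log10(rms / full_scale_rms)

print(round(level_db, 1))   # about -6 dB: halving amplitude costs ~6 dB
```

The factor of 20 (rather than 10) appears because decibels express power ratios and power goes as the square of amplitude.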
In electronics and telecommunications, modulation is the process of varying one or more properties of a periodic waveform, called the carrier signal, with a separate signal called the modulation signal that typically contains information to be transmitted. For example, the modulation signal might be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing a sequence of binary digits, a bitstream from a computer. The carrier is higher in frequency than the modulation signal. The purpose of modulation is to impress the information on the carrier wave, which is used to carry the information to another location. In radio communication the modulated carrier is transmitted through space as a radio wave to a radio receiver. Another purpose is to transmit multiple channels of information through a single communication medium, using frequency-division multiplexing (FDM). For example, in cable television, which uses FDM, many carrier signals carrying different television channels are transported through a single cable to customers. Since each carrier occupies a different frequency, the channels do not interfere with each other. At the destination end, the carrier signal is demodulated to extract the information-bearing modulation signal.
In radio communications, single-sideband modulation (SSB) or single-sideband suppressed-carrier modulation (SSB-SC) is a type of modulation used to transmit information, such as an audio signal, by radio waves. A refinement of amplitude modulation, it uses transmitter power and bandwidth more efficiently. Amplitude modulation produces an output signal the bandwidth of which is twice the maximum frequency of the original baseband signal. Single-sideband modulation avoids this bandwidth increase, and the power wasted on a carrier, at the cost of increased device complexity and more difficult tuning at the receiver.
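One common way to generate SSB is the phasing method: multiply the message by the carrier and the message's 90°-shifted copy by the shifted carrier, then subtract, which cancels one sideband. The sketch below shows this for a single message tone (where the 90° shift is simply sin in place of cos); frequencies are illustrative assumptions.

```python
import cmath, math

# A sketch of the phasing method for SSB with a single message tone.
# Subtracting the two products cancels the lower sideband; neither the
# carrier nor the lower sideband appears in the output.
fs, fc, fm = 4000, 1000, 100

ssb = [math.cos(2 * math.pi * fm * n / fs) * math.cos(2 * math.pi * fc * n / fs)
       - math.sin(2 * math.pi * fm * n / fs) * math.sin(2 * math.pi * fc * n / fs)
       for n in range(fs)]

def dft_mag(signal, f):
    return abs(sum(x * cmath.exp(-2j * math.pi * f * n / fs)
                   for n, x in enumerate(signal))) / len(signal)

print(round(dft_mag(ssb, 1100), 2),  # upper sideband: present
      round(dft_mag(ssb, 900), 2),   # lower sideband: suppressed
      round(dft_mag(ssb, 1000), 2))  # carrier: suppressed
```

Compared with the AM spectrum (carrier plus two sidebands), only one sideband carries power here, which is the bandwidth and power saving described above.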
Distortion is the alteration of the original shape of something. In communications and electronics it means the alteration of the waveform of an information-bearing signal, such as an audio signal representing sound or a video signal representing images, in an electronic device or communication channel.
For a device such as an amplifier or telecommunications system, group delay and phase delay are device performance properties that help to characterize time delay, which is the amount of time for the various frequency components of a signal to pass through the device from input to output. If this timing does not sufficiently meet certain requirements, the device will contribute to signal distortion. For example, enough distortion means poor fidelity in video or audio, or a high bit-error rate in a digital bit stream.
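Both quantities can be computed from a device's frequency response H(ω): phase delay is -phase(H)/ω and group delay is -d(phase)/dω. The sketch below uses a 5-tap moving average as an assumed example device; being symmetric (linear phase), it delays every frequency component by the same (N-1)/2 = 2 samples, so both delays agree.

```python
import cmath, math

# A sketch: phase delay and group delay of a 5-tap moving-average FIR
# filter, computed from its frequency response. For this linear-phase
# filter both delays equal (N - 1) / 2 = 2 samples at every frequency.
taps = [0.2] * 5

def response(w):
    return sum(h * cmath.exp(-1j * w * n) for n, h in enumerate(taps))

w, dw = 0.3, 1e-6
phase = cmath.phase(response(w))
phase_delay = -phase / w                                        # -phi / w
group_delay = -(cmath.phase(response(w + dw)) - phase) / dw     # -dphi / dw

print(round(phase_delay, 3), round(group_delay, 3))
```

For a filter whose group delay varies with frequency, different components would arrive at different times, which is exactly the distortion mechanism described above.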
Sound can be recorded and stored and played using either digital or analog techniques. Both techniques introduce errors and distortions in the sound, and these methods can be systematically compared. Musicians and listeners have argued over the superiority of digital versus analog sound recordings. Arguments for analog systems include the absence of fundamental error mechanisms which are present in digital audio systems, including aliasing and quantization noise. Advocates of digital point to the high levels of performance possible with digital audio, including excellent linearity in the audible band and low levels of noise and distortion.
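One of the digital error mechanisms named above, quantization noise, is easy to measure. The sketch below quantizes a full-scale sine to B bits with a uniform rounding quantizer and reports the resulting signal-to-noise ratio, which theory predicts to be roughly 6.02·B + 1.76 dB; the tone and sample count are arbitrary choices for the example.

```python
import math

# A sketch of quantization noise: quantize a full-scale sine to a given
# number of bits and measure the signal-to-noise ratio in dB. Theory
# predicts about 6.02 * bits + 1.76 dB for a full-scale sine.
def quantized_snr_db(bits, n=100000):
    step = 2.0 / (2 ** bits)          # full scale spans -1 .. +1
    sig = noise = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 137 * i / n)   # arbitrary test tone
        q = round(x / step) * step                # uniform quantizer
        sig += x * x
        noise += (q - x) ** 2
    return 10 * math.log10(sig / noise)

print(round(quantized_snr_db(8)), round(quantized_snr_db(16)))
```

The roughly 6 dB gained per added bit is why 16-bit audio has so much lower a noise floor than 8-bit audio.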
Frequency response is the quantitative measure of the output spectrum of a system or device in response to a stimulus, and is used to characterize the dynamics of the system. It is a measure of magnitude and phase of the output as a function of frequency, in comparison to the input. In simplest terms, if a sine wave is injected into a system at a given frequency, a linear system will respond at that same frequency with a certain magnitude and a certain phase angle relative to the input. Also for a linear system, doubling the amplitude of the input will double the amplitude of the output. In addition, if the system is time-invariant, then the frequency response also will not vary with time. Thus for linear time-invariant (LTI) systems, the frequency response can be seen as applying the system's transfer function to a purely imaginary number argument representing the frequency of the sinusoidal excitation.
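The sine-injection procedure described above can be simulated directly: drive a system with a sine, discard the transient, and correlate the output against the probe frequency to read off magnitude and phase. The system here, a one-pole lowpass y[n] = 0.9·y[n-1] + 0.1·x[n], is an illustrative choice, not from the article.

```python
import cmath, math

# A sketch of measuring frequency response empirically: inject a sine,
# let the transient settle, then correlate the output with the probe
# frequency to obtain the complex gain (magnitude and phase).
def measure(w, n=20000, settle=2000):
    y, out = 0.0, 0j
    for i in range(n):
        y = 0.9 * y + 0.1 * math.cos(w * i)   # one-pole lowpass system
        if i >= settle:
            out += y * cmath.exp(-1j * w * i)
    h = 2 * out / (n - settle)                # complex gain at frequency w
    return abs(h), math.degrees(cmath.phase(h))

gain, phase = measure(0.5)
print(round(gain, 3), round(phase, 1))
```

The measured values match the analytic response H(e^{jw}) = 0.1 / (1 - 0.9 e^{-jw}) at w = 0.5: a gain below unity and a phase lag, as expected of a lowpass.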
Audio system measurements are a means of quantifying system performance. These measurements are made for several purposes. Designers take measurements so that they can specify the performance of a piece of equipment. Maintenance engineers make them to ensure equipment is still working to specification, or to ensure that the cumulative defects of an audio path are within limits considered acceptable. Audio system measurements often accommodate psychoacoustic principles to measure the system in a way that relates to human hearing.
In telecommunications, Continuous Tone-Coded Squelch System or CTCSS is one type of circuit that is used to reduce the annoyance of listening to other users on a shared two-way radio communications channel. It is sometimes referred to as tone squelch. It does this by adding a low frequency audio tone to the voice. Where more than one group of users is on the same radio frequency, CTCSS circuitry mutes those users who are using a different CTCSS tone or no CTCSS. It is sometimes referred to as a sub-channel, but this is a misnomer because no additional channels are created. All users with different CTCSS tones on the same channel are still transmitting on the identical radio frequency, and their transmissions interfere with each other; however, the interference is masked under most conditions. The CTCSS feature also does not offer any security.
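The squelch decision reduces to detecting whether the expected sub-audible tone is present. The sketch below uses a single-bin DFT as a simple tone detector; the tone frequency, threshold, and test signals are illustrative assumptions (real CTCSS tones come from a standardized list, roughly 67 to 254 Hz).

```python
import cmath, math

# A sketch of CTCSS squelch logic: open the audio path only when the
# expected sub-audible tone is detected. Frequencies and the detection
# threshold are illustrative, not from any standard.
fs, tone = 8000, 100.0

def has_tone(signal, f, threshold=0.1):
    # single-bin DFT magnitude as a crude tone detector
    mag = abs(sum(x * cmath.exp(-2j * math.pi * f * i / fs)
                  for i, x in enumerate(signal))) / len(signal)
    return mag > threshold

voice = [math.sin(2 * math.pi * 440 * i / fs) for i in range(fs)]
with_tone = [v + 0.3 * math.sin(2 * math.pi * tone * i / fs)
             for i, v in enumerate(voice)]

print(has_tone(voice, tone), has_tone(with_tone, tone))
```

A receiver would run such a detector continuously and unmute only while it returns true, which is why transmissions carrying a different tone, or none, stay silent.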
Audio analysis refers to the extraction of information and meaning from audio signals for analysis, classification, storage, retrieval, synthesis, etc. The observation mediums and interpretation methods vary, as audio analysis can refer to the human ear and how people interpret the audible sound source, or it could refer to using technology such as an audio analyzer to evaluate other qualities of a sound source such as amplitude, distortion, frequency response, and more. Once an audio source's information has been observed, the information revealed can then be processed for the logical, emotional, descriptive, or otherwise relevant interpretation by the user.
A telephone hybrid is the component at the ends of a subscriber line of the public switched telephone network (PSTN) that converts between two-wire and four-wire forms of bidirectional audio paths. When used in broadcast facilities to enable the airing of telephone callers, the broadcast-quality telephone hybrid is known as a broadcast telephone hybrid or telephone balance unit.
A pulse-Doppler radar is a radar system that determines the range to a target using pulse-timing techniques, and uses the Doppler effect of the returned signal to determine the target object's velocity. It combines the features of pulse radars and continuous-wave radars, which were formerly separate due to the complexity of the electronics.
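The two measurements combine simple formulas: range from the pulse's round-trip time (r = c·t/2) and radial velocity from the Doppler shift (v = f_d·λ/2). The echo numbers below are made up for illustration.

```python
# A sketch of the two pulse-Doppler measurements with made-up echo
# numbers: range from the round-trip time of the pulse, radial velocity
# from the Doppler shift of the return.
c = 3.0e8             # speed of light, m/s
f0 = 10.0e9           # transmit frequency, Hz (illustrative X-band value)

round_trip = 200e-6   # s, time from pulse transmission to echo
doppler = 2000.0      # Hz, measured Doppler shift of the echo

rng = c * round_trip / 2        # the pulse covers the distance twice
vel = doppler * c / (2 * f0)    # v = f_d * wavelength / 2

print(rng / 1000, vel)          # range in km, radial velocity in m/s
```

The factor of 2 appears in both formulas for the same reason: the signal travels out and back, so both the delay and the Doppler shift are doubled relative to a one-way path.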
When describing a periodic function in the time domain, the DC bias, DC component, DC offset, or DC coefficient is the mean amplitude of the waveform. If the mean amplitude is zero, there is no DC bias. A waveform with no DC bias is known as a DC balanced or DC free waveform.
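Since the DC offset is just the mean of the waveform over a period, removing it is a one-line operation. The 0.25 V bias below is an assumed example value.

```python
import math

# A sketch: the DC offset is the mean amplitude over a period;
# subtracting it yields a DC-balanced (DC-free) waveform.
n = 1000
wave = [0.25 + math.sin(2 * math.pi * i / n) for i in range(n)]  # 0.25 bias

dc = sum(wave) / len(wave)
balanced = [x - dc for x in wave]

print(round(dc, 3), round(sum(balanced) / len(balanced), 3))
```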
Pre-echo, sometimes called a forward echo, is a digital audio compression artifact where a sound is heard before it occurs. It is most noticeable in impulsive sounds from percussion instruments such as castanets or cymbals.
In radio, a detector is a device or circuit that extracts information from a modulated radio frequency current or voltage. The term dates from the first three decades of radio (1888–1918). Unlike modern radio stations which transmit sound on an uninterrupted carrier wave, early radio stations transmitted information by radiotelegraphy. The transmitter was switched on and off to produce long or short periods of radio waves, spelling out text messages in Morse code. Therefore, early radio receivers had only to distinguish between the presence or absence of a radio signal. The device that performed this function in the receiver circuit was called a detector. A variety of different detector devices, such as the coherer, electrolytic detector, magnetic detector and the crystal detector, were used during the wireless telegraphy era until superseded by vacuum tube technology.
Loudspeaker measurement is the practice of determining the behaviour of loudspeakers by measuring various aspects of performance. This measurement is especially important because loudspeakers, being transducers, have a higher level of distortion than other audio system components used in playback or sound reinforcement.
Sound from ultrasound is the generation of audible sound from modulated ultrasound without using an active receiver. This happens when the modulated ultrasound passes through a nonlinear medium which acts, intentionally or unintentionally, as a demodulator.
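The demodulation mechanism can be sketched with a square-law nonlinearity standing in for the nonlinear medium: before it, the signal contains only ultrasonic components; after it, a component at the audio frequency appears. All frequencies below are illustrative assumptions.

```python
import cmath, math

# A sketch of self-demodulation: an ultrasonic carrier AM-modulated by
# an audio tone passes through a square-law nonlinearity (a crude
# stand-in for the nonlinear medium), and a component at the audio
# frequency appears in the output.
fs, fc, fa = 200000, 40000, 1000     # sample rate, ultrasound, audio (Hz)
n = fs // 100                        # 10 ms of signal

tx = [(1 + 0.5 * math.cos(2 * math.pi * fa * i / fs))
      * math.cos(2 * math.pi * fc * i / fs) for i in range(n)]
rx = [x * x for x in tx]             # square-law nonlinearity

def dft_mag(signal, f):
    return abs(sum(x * cmath.exp(-2j * math.pi * f * i / fs)
                   for i, x in enumerate(signal))) / len(signal)

# The audio tone is absent before the nonlinearity, present after it.
print(round(dft_mag(tx, fa), 3), round(dft_mag(rx, fa), 3))
```

Squaring the modulated signal produces the envelope squared (plus components near twice the carrier frequency), and the envelope contains the audio; in air, the medium's own nonlinearity plays this role.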
A lattice phase equaliser or lattice filter is an example of an all-pass filter. That is, the attenuation of the filter is constant at all frequencies but the relative phase between input and output varies with frequency. The lattice filter topology has the particular property of being a constant-resistance network and for this reason is often used in combination with other constant resistance filters such as bridge-T equalisers. The topology of a lattice filter, also called an X-section, is identical to bridge topology. The lattice phase equaliser was invented by Otto Zobel, using a filter topology proposed by George Campbell.
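The all-pass property itself is easy to verify numerically. The sketch below uses a first-order digital all-pass H(z) = (a + z⁻¹) / (1 + a·z⁻¹) as a stand-in for the analog lattice (the coefficient a is an arbitrary example value): its magnitude is exactly 1 at every frequency while its phase varies.

```python
import cmath, math

# A sketch of the all-pass property: a first-order digital all-pass
# section has unit magnitude at every frequency, while its phase shift
# depends on frequency. The coefficient a is an illustrative value.
a = 0.4

def h(w):
    z1 = cmath.exp(-1j * w)          # z^-1 evaluated on the unit circle
    return (a + z1) / (1 + a * z1)

for w in (0.5, 1.5, 2.5):
    print(round(abs(h(w)), 3), round(math.degrees(cmath.phase(h(w))), 1))
```

This is exactly the behaviour a phase equaliser exploits: cascading such sections corrects a system's phase response without altering its amplitude response.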
Tube sound is the characteristic sound associated with a vacuum tube amplifier, a vacuum tube-based audio amplifier. At first, the concept of tube sound did not exist, because practically all electronic amplification of audio signals was done with vacuum tubes and other comparable methods were not known or used. After introduction of solid state amplifiers, tube sound appeared as the logical complement of transistor sound, which had some negative connotations due to crossover distortion in early transistor amplifiers. The audible significance of tube amplification on audio signals is a subject of continuing debate among audio enthusiasts.