Colors of noise
In audio engineering, electronics, physics, and many other fields, the color of noise or noise spectrum refers to the power spectrum of a noise signal (a signal produced by a stochastic process). Different colors of noise have significantly different properties. For example, as audio signals they will sound different to human ears, and as images they will have a visibly different texture. Therefore, each application typically requires noise of a specific color. This sense of 'color' for noise signals is similar to the concept of timbre in music (which is also called "tone color"; however, the latter is almost always used for sound, and may consider detailed features of the spectrum).
The practice of naming kinds of noise after colors started with white noise, a signal whose spectrum has equal power within any equal interval of frequencies. That name was given by analogy with white light, which was (incorrectly) assumed to have such a flat power spectrum over the visible range. Other color names, such as pink, red, and blue, were then given to noise with other spectral profiles, often (but not always) in reference to the color of light with similar spectra. Some of those names have standard definitions in certain disciplines, while others are informal and poorly defined. Many of these definitions assume a signal with components at all frequencies, with a power spectral density per unit of bandwidth proportional to 1/f^β, and hence they are examples of power-law noise. For instance, the spectral density of white noise is flat (β = 0), while flicker or pink noise has β = 1, Brownian noise has β = 2, and blue noise has β = −1.
Various noise models are employed in analysis, many of which fall under the above categories. AR noise or "autoregressive noise" is such a model, and generates simple examples of the above noise types, and more. The Federal Standard 1037C Telecommunications Glossary [1] [2] defines white, pink, blue, and black noise.
The color names for these different types of sounds are derived from a loose analogy between the spectrum of frequencies of sound waves present in the sound (as shown in the blue diagrams) and the equivalent spectrum of light-wave frequencies. That is, if the sound-wave pattern of "blue noise" were translated into light waves, the resulting light would be blue, and so on.
White noise is a signal (or process), named by analogy to white light, with a flat frequency spectrum when plotted as a linear function of frequency (e.g., in Hz). In other words, the signal has equal power in any band of a given bandwidth (power spectral density) when the bandwidth is measured in Hz. For example, with a white noise audio signal, the range of frequencies between 40 Hz and 60 Hz contains the same amount of sound power as the range between 400 Hz and 420 Hz, since both intervals are 20 Hz wide. Note that spectra are often plotted with a logarithmic frequency axis rather than a linear one, in which case equal physical widths on the printed or displayed plot do not all have the same bandwidth, with the same physical width covering more Hz at higher frequencies than at lower frequencies. In this case a white noise spectrum that is equally sampled in the logarithm of frequency (i.e., equally sampled on the X axis) will slope upwards at higher frequencies rather than being flat. However it is not unusual in practice for spectra to be calculated using linearly-spaced frequency samples but plotted on a logarithmic frequency axis, potentially leading to misunderstandings and confusion if the distinction between equally spaced linear frequency samples and equally spaced logarithmic frequency samples is not kept in mind. [3]
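The equal-power-per-equal-bandwidth property can be checked numerically. The following sketch (an illustration; the sample rate, band edges, and trial count are arbitrary choices, not from the source) averages periodograms of Gaussian white noise and compares the power in the two 20 Hz wide bands used in the example above:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2000          # sample rate in Hz (illustrative choice)
n = fs             # one-second segments give 1 Hz frequency bins
trials = 500       # average many realizations to reduce variance

freqs = np.fft.rfftfreq(n, d=1 / fs)
psd = np.zeros(freqs.size)
for _ in range(trials):
    x = rng.standard_normal(n)              # white noise
    psd += np.abs(np.fft.rfft(x)) ** 2 / n  # periodogram
psd /= trials

def band_power(f_lo, f_hi):
    """Total power in the band [f_lo, f_hi) Hz."""
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

# Both bands are 20 Hz wide, so they carry (statistically) equal power.
ratio = band_power(40, 60) / band_power(400, 420)
```

The ratio comes out close to 1, as the flat spectrum predicts.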
The frequency spectrum of pink noise is linear in logarithmic scale; it has equal power in bands that are proportionally wide. [4] This means that pink noise would have equal power in the frequency range from 40 to 60 Hz as in the band from 4000 to 6000 Hz. Since humans hear in such a proportional space, where a doubling of frequency (an octave) is perceived the same regardless of actual frequency (40–60 Hz is heard as the same interval and distance as 4000–6000 Hz), every octave contains the same amount of energy and thus pink noise is often used as a reference signal in audio engineering. The spectral power density, compared with white noise, decreases by 3.01 dB per octave (10 dB per decade); density proportional to 1/f. For this reason, pink noise is often called "1/f noise".
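The two slope figures quoted above are the same statement in different units: halving the density per octave is 10·log10(1/2) ≈ −3.01 dB, and a factor-of-ten drop per decade is exactly −10 dB. A two-line check:

```python
import math

# 1/f density halves per octave: 10*log10(1/2) is about -3.01 dB.
per_octave = 10 * math.log10(2 ** -1)
# Over a decade (10x in frequency), the density falls tenfold: -10 dB.
per_decade = 10 * math.log10(10 ** -1)
```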
Since there are an infinite number of logarithmic bands at both the low frequency (DC) and high frequency ends of the spectrum, any finite energy spectrum must have less energy than pink noise at both ends. Pink noise is the only power-law spectral density that has this property: all steeper power-law spectra are finite if integrated to the high-frequency end, and all flatter power-law spectra are finite if integrated to the DC, low-frequency limit.
Brownian noise, also called Brown noise, is noise with a power density which decreases 6.02 dB per octave (20 dB per decade) with increasing frequency (frequency density proportional to 1/f²) over a frequency range excluding zero (DC). It is also called "red noise", with pink being between red and white.
Brownian noise can be generated with temporal integration of white noise. "Brown" noise is not named for a power spectrum that suggests the color brown; rather, the name derives from Brownian motion, also known as "random walk" or "drunkard's walk".
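The integration recipe just described is easy to try: cumulatively summing white noise and fitting the log-log slope of the averaged spectrum should give β ≈ 2, i.e. a spectral slope near −2. A minimal NumPy sketch (segment length, trial count, and fitting band are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 4096, 200

freqs = np.fft.rfftfreq(n)          # normalized frequency, 0..0.5
psd = np.zeros(freqs.size)
for _ in range(trials):
    white = rng.standard_normal(n)
    brown = np.cumsum(white)        # temporal integration of white noise
    psd += np.abs(np.fft.rfft(brown)) ** 2 / n
psd /= trials

# Fit the log-log slope over a mid-band away from DC and Nyquist;
# Brownian (red) noise should show beta ~ 2, i.e. a slope near -2.
band = slice(4, 256)
slope = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)[0]
```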
Blue noise is also called azure noise. Blue noise's power density increases 3.01 dB per octave with increasing frequency (density proportional to f) over a finite frequency range. [5] In computer graphics, the term "blue noise" is sometimes used more loosely for any noise with minimal low-frequency components and no concentrated spikes in energy. This can be good noise for dithering. [6] Retinal cells are arranged in a blue-noise-like pattern which yields good visual resolution. [7]
Cherenkov radiation is a naturally occurring example of almost perfect blue noise, with the power density growing linearly with frequency over spectrum regions where the permittivity and index of refraction of the medium are approximately constant. The exact density spectrum is given by the Frank–Tamm formula. In this case, the finiteness of the frequency range comes from the finiteness of the range over which a material can have a refractive index greater than unity. For these reasons, Cherenkov radiation appears as a bright blue color.
Violet noise is also called purple noise. Violet noise's power density increases 6.02 dB per octave with increasing frequency (density proportional to f²) over a finite frequency range. [8] [9] "The spectral analysis shows that GPS acceleration errors seem to be violet noise processes. They are dominated by high-frequency noise." It is also known as differentiated white noise, due to its being the result of the differentiation of a white noise signal.
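The "differentiated white noise" description can be illustrated directly. First-differencing a white sequence produces a spectrum rising at 6 dB per octave, and the differencing leaves a characteristic lag-1 autocorrelation of exactly −1/2 in expectation (for y_t = w_t − w_{t−1}, Cov(y_t, y_{t+1}) = −Var(w) while Var(y) = 2·Var(w)). A small numerical check of that property (a sketch, not from the source):

```python
import numpy as np

rng = np.random.default_rng(2)
white = rng.standard_normal(100_000)
violet = np.diff(white)   # first difference of white noise

# Differencing anti-correlates adjacent samples: the lag-1
# autocorrelation of a first-differenced white sequence is -1/2.
y = violet - violet.mean()
r1 = (y[:-1] * y[1:]).sum() / (y * y).sum()
```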
Due to the diminished sensitivity of the human ear to high-frequency hiss and the ease with which white noise can be electronically differentiated (high-pass filtered at first order), many early adaptations of dither to digital audio used violet noise as the dither signal.
Acoustic thermal noise of water has a violet spectrum, causing it to dominate hydrophone measurements at high frequencies. [10] "Predictions of the thermal noise spectrum, derived from classical statistical mechanics, suggest increasing noise with frequency with a positive slope of 6.02 dB octave⁻¹." "Note that thermal noise increases at the rate of 20 dB decade⁻¹." [11]
Grey noise is random white noise subjected to a psychoacoustic equal loudness curve (such as an inverted A-weighting curve) over a given range of frequencies, giving the listener the perception that it is equally loud at all frequencies. This is in contrast to standard white noise which has equal strength over a linear scale of frequencies but is not perceived as being equally loud due to biases in the human equal-loudness contour.
Velvet noise is a sparse sequence of random positive and negative impulses. Velvet noise is typically characterised by its density in taps/second. At high densities it sounds similar to white noise; however, it is perceptually "smoother". [12] The sparse nature of velvet noise allows for efficient time-domain convolution, making velvet noise particularly useful for applications where computational resources are limited, like real-time reverberation algorithms. [13] [14] Velvet noise is also frequently used in decorrelation filters. [15]
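One common construction of velvet noise places a single ±1 impulse, with random sign and random position, in each grid cell of fs/density samples. The sketch below is an illustration of that grid-based variant (the function name and parameters are ours; other variants exist in the literature):

```python
import numpy as np

def velvet_noise(n_samples, fs, density, rng):
    """Sparse +/-1 impulse train: one impulse, with random sign and
    position, per grid cell of fs/density samples."""
    grid = int(fs / density)              # samples per impulse
    out = np.zeros(n_samples)
    for start in range(0, n_samples - grid + 1, grid):
        pos = start + rng.integers(grid)  # random position in the cell
        out[pos] = rng.choice([-1.0, 1.0])
    return out

rng = np.random.default_rng(3)
# One second at 48 kHz with 2000 taps/second -> 2000 impulses.
v = velvet_noise(48_000, fs=48_000, density=2000, rng=rng)
```

Because the signal is mostly zeros, convolving with it reduces to a handful of additions and subtractions per output sample, which is the efficiency property mentioned above.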
There are also many colors used without precise definitions (or as synonyms for formally defined colors), sometimes with multiple definitions.
In telecommunication, the term noisy white has the following meanings: [24]
In telecommunication, the term noisy black has the following meanings: [25]
Colored noise can be computer-generated by first generating a white noise signal, Fourier-transforming it, then multiplying the amplitudes of the different frequency components with a frequency-dependent function. [26] Matlab programs are available to generate power-law colored noise in one or any number of dimensions.
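The frequency-domain recipe just described can be sketched in a few lines of NumPy (a hedged illustration: the function name, normalization, and handling of the DC bin are our choices). Scaling the amplitude spectrum by f^(−β/2) gives a power spectral density proportional to 1/f^β:

```python
import numpy as np

def powerlaw_noise(n, beta, rng):
    """White Gaussian noise shaped in the frequency domain so that its
    power spectral density is proportional to 1/f**beta."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    scale = np.ones_like(f)
    scale[1:] = f[1:] ** (-beta / 2)  # amplitude ~ f^(-beta/2)
    scale[0] = 0.0                    # discard the DC component
    return np.fft.irfft(spectrum * scale, n)

# Sanity check: the averaged spectrum of beta = 1 (pink) noise
# should fall with a log-log slope near -1.
rng = np.random.default_rng(4)
n, trials = 8192, 100
freqs = np.fft.rfftfreq(n)
psd = np.zeros(freqs.size)
for _ in range(trials):
    pink = powerlaw_noise(n, beta=1, rng=rng)
    psd += np.abs(np.fft.rfft(pink)) ** 2 / n
psd /= trials
band = slice(4, 512)
slope = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)[0]
```

Setting beta to 0, 2, or −1 in the same function yields white, Brownian, or blue noise respectively.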
Identifying the dominant noise type in a time series has many applications including clock stability analysis and market forecasting. There are two algorithms based on autocorrelation functions that can identify the dominant noise type in a data set provided the noise type has a power law spectral density.
The first method for noise identification is based on a paper by W. J. Riley and C. A. Greenhall. [27] First the lag(1) autocorrelation function is computed and checked to see if it is less than one third (which is the threshold for a stationary process):

r_1 = \frac{\sum_{t=1}^{N-1} (z_t - \bar{z})(z_{t+1} - \bar{z})}{\sum_{t=1}^{N} (z_t - \bar{z})^2} < \frac{1}{3}

where N is the number of data points in the time series, the z_t are the phase or frequency values, and \bar{z} is the average value of the time series. If used for clock stability analysis, the z_t values are the non-overlapped (or binned) averages of the original frequency or phase array for some averaging time and factor. Now discrete-time fractionally integrated noises have power spectral densities of the form

S(f) \propto \left| 2 \sin(\pi f) \right|^{-2\delta}

which are stationary for \delta < 1/2. The value of \delta is calculated using r_1:

\delta = \frac{r_1}{1 + r_1}

where r_1 is the lag(1) autocorrelation function defined above (note that r_1 < 1/3 is equivalent to \delta < 1/4). If \delta \ge 1/4, then the first differences of the adjacent time series data are taken d times until \delta < 1/4. The power law for the stationary noise process is calculated from the resulting \delta and the number of times d the data has been differenced to achieve \delta < 1/4 as follows:

p = -2(\delta + d)

where p is the power of the frequency noise, which can be rounded to identify the dominant noise type (for frequency data p is the power of the frequency noise, but for phase data the power of the frequency noise is p + 2).
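The lag(1) identification procedure described above can be sketched compactly (an illustration: the function names are ours, and a safety cap on the number of differencing passes is added for robustness). With frequency data, white noise should identify as p ≈ 0 and a random walk (Brownian noise) as p ≈ −2:

```python
import numpy as np

def lag1_autocorr(z):
    """Sample lag(1) autocorrelation of a series."""
    d = z - z.mean()
    return (d[:-1] * d[1:]).sum() / (d * d).sum()

def noise_power_law(z, max_diffs=5):
    """Estimate the exponent p of a power-law noise (PSD ~ f**p):
    difference until delta = r1/(1+r1) < 1/4, then p = -2*(delta + d)."""
    z = np.asarray(z, dtype=float)
    d = 0
    r1 = lag1_autocorr(z)
    delta = r1 / (1 + r1)
    while delta >= 0.25 and d < max_diffs:  # difference until stationary
        z = np.diff(z)
        d += 1
        r1 = lag1_autocorr(z)
        delta = r1 / (1 + r1)
    return -2 * (delta + d)

rng = np.random.default_rng(5)
white = rng.standard_normal(10_000)
p_white = noise_power_law(white)             # near 0 for white noise
p_walk = noise_power_law(np.cumsum(white))   # near -2 for a random walk
```

Rounding the returned exponent to the nearest integer identifies the dominant noise type.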
This method improves on the accuracy of the previous method and was introduced by Z. Chunlei, Z. Qi, and Y. Shuhuana. Instead of the lag(1) autocorrelation function, the lag(m) autocorrelation function is computed: [28]

r_m = \frac{\sum_{t=1}^{N-m} (z_t - \bar{z})(z_{t+m} - \bar{z})}{\sum_{t=1}^{N} (z_t - \bar{z})^2}

where m is the "lag" or shift between the time series and the delayed version of itself. A major difference is that the z_t are now the averaged values of the original time series, computed with a moving-window average whose averaging factor is also equal to m. The value of \delta = r_m / (1 + r_m) is computed the same way as in the previous method, and \delta < 1/4 is again the criterion for a stationary process. The other major difference between this and the previous method is that the differencing used to make the time series stationary (\delta < 1/4) is done between values that are spaced a distance m apart:

z'_t = z_{t+m} - z_t

The value of the power p is calculated the same way as in the previous method as well.
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in unit of hertz.
In signal processing, phase noise is the frequency-domain representation of random fluctuations in the phase of a waveform, corresponding to time-domain deviations from perfect periodicity (jitter). Generally speaking, radio-frequency engineers speak of the phase noise of an oscillator, whereas digital-system engineers work with the jitter of a clock.
Specific detectivity, or D*, for a photodetector is a figure of merit used to characterize performance, equal to the reciprocal of noise-equivalent power (NEP), normalized per square root of the sensor's area and frequency bandwidth.
In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density. The term is used with this or similar meanings in many scientific and technical disciplines, including physics, acoustical engineering, telecommunications, and statistical forecasting. White noise refers to a statistical model for signals and signal sources, not to any specific signal. White noise draws its name from white light, although light that appears white generally does not have a flat power spectral density over the visible band.
Pink noise, 1⁄f noise, fractional noise or fractal noise is a signal or process with a frequency spectrum such that the power spectral density is inversely proportional to the frequency of the signal. In pink noise, each octave interval carries an equal amount of noise energy.
Johnson–Nyquist noise is the electronic noise generated by the thermal agitation of the charge carriers inside an electrical conductor at equilibrium, which happens regardless of any applied voltage. Thermal noise is present in all electrical circuits, and in sensitive electronic equipment can drown out weak signals, and can be the limiting factor on sensitivity of electrical measuring instruments. Thermal noise is proportional to absolute temperature, so some sensitive electronic equipment such as radio telescope receivers are cooled to cryogenic temperatures to improve their signal-to-noise ratio. The generic, statistical physical derivation of this noise is called the fluctuation-dissipation theorem, where generalized impedance or generalized susceptibility is used to characterize the medium.
In signal processing, the power spectrum of a continuous time signal describes the distribution of power into frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of a signal, analyzed in terms of its frequency content, is called its spectrum.
The short-time Fourier transform (STFT) is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time. In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment. One then usually plots the changing spectra as a function of time, known as a spectrogram or waterfall plot, such as commonly used in software defined radio (SDR) based spectrum displays. Full bandwidth displays covering the whole range of an SDR commonly use fast Fourier transforms (FFTs) with 2^24 points on desktop computers.
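The segment-and-transform procedure can be sketched directly (an illustration; the frame length, hop size, window choice, and test tone are arbitrary assumptions). Each row of the result is the magnitude spectrum of one windowed frame:

```python
import numpy as np

def stft(x, frame_len, hop):
    """Magnitude STFT: split x into frames of frame_len samples taken
    every hop samples, window each frame, and FFT it."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame_len//2 + 1)

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)       # one second of a 1 kHz tone
spec = stft(x, frame_len=256, hop=128)
peak_bin = spec.mean(axis=0).argmax()  # 1000 Hz / (8000/256 Hz per bin) = bin 32
```

Plotting `spec` against time and frequency gives the spectrogram described above.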
The sensitivity of an electronic device, such as a communications system receiver, or detection device, such as a PIN diode, is the minimum magnitude of input signal required to produce a specified output signal having a specified signal-to-noise ratio, or other specified criteria. In general, it is the signal level required for a particular quality of received information.
In science, Brownian noise, also known as Brown noise or red noise, is the type of signal noise produced by Brownian motion, hence its alternative name of random walk noise. The term "Brown noise" does not come from the color, but after Robert Brown, who documented the erratic motion for multiple types of inanimate particles in water. The term "red noise" comes from the "white noise"/"white light" analogy; red noise is strong in longer wavelengths, similar to the red end of the visible spectrum.
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it can be used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term ; thus the model is in the form of a stochastic difference equation which should not be confused with a differential equation. Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.
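A minimal AR(1) example (a sketch; the coefficient value is an arbitrary choice): the model x_t = φ·x_{t−1} + ε_t is stationary for |φ| < 1, and its lag-1 autocorrelation then equals φ, which gives a simple numerical check:

```python
import numpy as np

rng = np.random.default_rng(6)
phi, n = 0.9, 100_000

# AR(1): each output depends linearly on the previous output plus
# a white-noise innovation term.
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# For a stationary AR(1) process the lag-1 autocorrelation equals phi.
d = x - x.mean()
r1 = (d[:-1] * d[1:]).sum() / (d * d).sum()
```

Varying φ moves the process between nearly white (φ ≈ 0) and strongly low-frequency-dominated (φ close to 1) behavior, which is why AR models generate simple examples of several noise colors.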
In digital communication or data transmission, Eb/N0 (energy per bit to noise power spectral density ratio) is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.
Quantum noise is noise arising from the indeterminate state of matter in accordance with fundamental principles of quantum mechanics, specifically the uncertainty principle and via zero-point energy fluctuations. Quantum noise is due to the apparently discrete nature of the small quantum constituents such as electrons, as well as the discrete nature of quantum effects, such as photocurrents.
A cyclostationary process is a signal having statistical properties that vary cyclically with time. A cyclostationary process can be viewed as multiple interleaved stationary processes. For example, the maximum daily temperature in New York City can be modeled as a cyclostationary process: the maximum temperature on July 21 is statistically different from the temperature on December 20; however, it is a reasonable approximation that the temperature on December 20 of different years has identical statistics. Thus, we can view the random process composed of daily maximum temperatures as 365 interleaved stationary processes, each of which takes on a new value once per year.
The autocorrelation technique is a method for estimating the dominating frequency in a complex signal, as well as its variance. Specifically, it calculates the first two moments of the power spectrum, namely the mean and variance. It is also known as the pulse-pair algorithm in radar theory.
In communications, noise spectral density (NSD), noise power density, noise power spectral density, or simply noise density (N0) is the power spectral density of noise or the noise power per unit of bandwidth. It has dimension of power over frequency, whose SI unit is watt per hertz (equivalent to watt-second or joule). It is commonly used in link budgets as the denominator of the important figure-of-merit ratios, such as carrier-to-noise-density ratio as well as Eb/N0 and Es/N0.
Pulse compression is a signal processing technique commonly used by radar, sonar and echography to either increase the range resolution when pulse length is constrained or increase the signal to noise ratio when the peak power and the bandwidth of the transmitted signal are constrained. This is achieved by modulating the transmitted pulse and then correlating the received signal with the transmitted pulse.
In statistical signal processing, the goal of spectral density estimation (SDE) or simply spectral estimation is to estimate the spectral density of a signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
The concept of a linewidth is borrowed from laser spectroscopy. The linewidth of a laser is a measure of its phase noise. The spectrogram of a laser is produced by passing its light through a prism. The spectrogram of the output of a pure noise-free laser will consist of a single infinitely thin line. If the laser exhibits phase noise, the line will have non-zero width. The greater the phase noise, the wider the line. The same will be true with oscillators. The spectrum of the output of a noise-free oscillator has energy at each of the harmonics of the output signal, but the bandwidth of each harmonic will be zero. If the oscillator exhibits phase noise, the harmonics will not have zero bandwidth. The more phase noise the oscillator exhibits, the wider the bandwidth of each harmonic.
In statistics, Whittle likelihood is an approximation to the likelihood function of a stationary Gaussian time series. It is named after the mathematician and statistician Peter Whittle, who introduced it in his PhD thesis in 1951. It is commonly used in time series analysis and signal processing for parameter estimation and signal detection.
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 22 January 2022.