In signal processing, the energy $E_s$ of a continuous-time signal x(t) is defined as the area under the squared magnitude of the signal, i.e., mathematically

$$E_s = \int_{-\infty}^{\infty} |x(t)|^2 \, dt$$

and the energy of a discrete-time signal x(n) is defined mathematically as

$$E_s = \sum_{n=-\infty}^{\infty} |x(n)|^2$$
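As a minimal sketch, the discrete-time definition can be evaluated directly with NumPy (the signal below is an illustrative decaying exponential, not from the article):

```python
import numpy as np

# Illustrative discrete-time signal: x[n] = 0.5**n for n = 0..9
x = 0.5 ** np.arange(10)

# Signal energy: the sum of squared magnitudes over all samples
energy = np.sum(np.abs(x) ** 2)
print(energy)  # partial geometric series, (1 - 0.25**10) / 0.75
```

For a finite-length signal the infinite sum reduces to a sum over the available samples, as above.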
Energy in this context is not, strictly speaking, the same as the conventional notion of energy in physics and the other sciences. The two concepts are, however, closely related, and it is possible to convert from one to the other:

$$E = \frac{E_s}{Z} = \frac{1}{Z} \int_{-\infty}^{\infty} |x(t)|^2 \, dt$$

where Z represents the magnitude, in appropriate units of measure, of the load driven by the signal.
For example, if x(t) represents the potential (in volts) of an electrical signal propagating across a transmission line, then Z would represent the characteristic impedance (in ohms) of the transmission line. The units of measure for the signal energy would appear as volt²·seconds, which is not dimensionally correct for energy in the sense of the physical sciences. After dividing by Z, however, the dimensions of E would become volt²·seconds per ohm, which is equivalent to joules, the SI unit for energy as defined in the physical sciences.
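A short numeric sketch of this conversion (the pulse amplitude, duration, and impedance below are made-up illustrative values):

```python
# Hypothetical example: a 1 V rectangular pulse lasting 2 ms on a 50-ohm line.
amplitude = 1.0      # volts
duration = 2e-3      # seconds
Z = 50.0             # characteristic impedance, ohms

# Signal energy of a constant-amplitude pulse: |x|^2 * duration
signal_energy = amplitude**2 * duration   # volt^2 * seconds
physical_energy = signal_energy / Z       # joules
print(physical_energy)  # 4e-05 J
```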
Similarly, the spectral energy density of signal x(t) is

$$E_s(f) = |X(f)|^2$$
where X(f) is the Fourier transform of x(t).
For example, if x(t) represents the magnitude of the electric field component (in volts per meter) of an optical signal propagating through free space, then the dimensions of X(f) would become volt·seconds per meter and would represent the signal's spectral energy density (in volt²·second² per meter²) as a function of frequency f (in hertz). Again, these units of measure are not dimensionally correct in the true sense of energy density as defined in physics. Dividing by Z₀, the characteristic impedance of free space (in ohms), the dimensions become joule·seconds per meter² or, equivalently, joules per meter² per hertz, which is dimensionally correct in SI units for spectral energy density.
As a consequence of Parseval's theorem, the signal energy is always equal to the integral over all frequencies (or, for discrete spectra, the sum over all frequency components) of the signal's spectral energy density.
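This equality can be checked numerically with the discrete form of Parseval's theorem; the test signal below is arbitrary random data, and the 1/N factor reflects NumPy's unnormalized FFT convention:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # arbitrary real test signal

# Time-domain energy: sum of |x[n]|^2
time_energy = np.sum(np.abs(x) ** 2)

# Frequency-domain energy via the DFT; the 1/N factor is the
# discrete Parseval relation for numpy's unnormalized FFT
X = np.fft.fft(x)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)

print(np.isclose(time_energy, freq_energy))  # True
```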
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.
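As a sketch of using autocorrelation to recover a periodicity hidden in noise (the signal, noise level, and period below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(1000)
period = 50
# Periodic component buried in additive noise
signal = np.sin(2 * np.pi * n / period) + 0.5 * rng.standard_normal(n.size)

# Autocorrelation for non-negative lags (biased estimate)
x = signal - signal.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]   # normalize so that acf[0] == 1

# Search away from the lag-0 main lobe; the strongest peak
# in this window sits near the hidden period
peak_lag = 20 + np.argmax(acf[20:80])
print(peak_lag)  # close to 50
```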
Power is the amount of energy transferred or converted per unit time. In the International System of Units, the unit of power is the watt, equal to one joule per second. Power is a scalar quantity.
In electrical engineering, impedance is the opposition to alternating current presented by the combined effect of resistance and reactance in a circuit.
Ohm's law states that the electric current through a conductor between two points is directly proportional to the voltage across the two points. Introducing the constant of proportionality, the resistance, one arrives at the three mathematical equations used to describe this relationship:

$$I = \frac{V}{R}, \quad V = IR, \quad R = \frac{V}{I}$$

where I is the current through the conductor, V is the voltage measured across the conductor, and R is the resistance of the conductor.
The electrical resistance of an object is a measure of its opposition to the flow of electric current. Its reciprocal quantity is electrical conductance, measuring the ease with which an electric current passes. Electrical resistance shares some conceptual parallels with mechanical friction. The SI unit of electrical resistance is the ohm, while electrical conductance is measured in siemens (S).
The Whittaker–Shannon interpolation formula or sinc interpolation is a method to construct a continuous-time bandlimited function from a sequence of real numbers. The formula dates back to the works of E. Borel in 1898, and E. T. Whittaker in 1915, and was cited from works of J. M. Whittaker in 1935, and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called Shannon's interpolation formula and Whittaker's interpolation formula. E. T. Whittaker, who published it in 1915, called it the Cardinal series.
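A minimal sketch of the formula, x(t) = Σₙ x[n]·sinc((t − nT)/T), reconstructing an off-grid value of a bandlimited signal from its samples (the sample rate, window length, and test signal are illustrative assumptions; truncating the infinite sum leaves a small residual error):

```python
import numpy as np

T = 0.1                                  # sample period (s), i.e. 10 Hz sampling
n = np.arange(-50, 51)                   # finite window of sample indices
x_n = np.cos(2 * np.pi * 1.0 * n * T)    # samples of a 1 Hz cosine (well below Nyquist)

def sinc_interp(t, x_n, n, T):
    """Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc((t - n*T)/T)."""
    # np.sinc(u) is the normalized sinc, sin(pi*u)/(pi*u)
    return np.sum(x_n * np.sinc((t - n * T) / T))

t = 0.033                                # an instant between sample points
reconstructed = sinc_interp(t, x_n, n, T)
print(reconstructed)  # close to cos(2*pi*0.033)
```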
In electrical circuits, reactance is the opposition presented to alternating current by inductance and capacitance. Along with resistance, it is one of two elements of impedance; however, while both elements involve transfer of electrical energy, no dissipation of electrical energy as heat occurs in reactance; instead, the reactance stores energy until a quarter-cycle later when the energy is returned to the circuit. Greater reactance gives smaller current for the same applied voltage.
A Fermi gas is an idealized model, an ensemble of many non-interacting fermions. Fermions are particles that obey Fermi–Dirac statistics, like electrons, protons, and neutrons, and, in general, particles with half-integer spin. These statistics determine the energy distribution of fermions in a Fermi gas in thermal equilibrium, which is characterized by their number density, temperature, and the set of available energy states. The model is named after the Italian physicist Enrico Fermi.
Johnson–Nyquist noise is the electronic noise generated by the thermal agitation of the charge carriers inside an electrical conductor at equilibrium, which happens regardless of any applied voltage. Thermal noise is present in all electrical circuits, and in sensitive electronic equipment can drown out weak signals, and can be the limiting factor on sensitivity of electrical measuring instruments. Thermal noise is proportional to absolute temperature, so some sensitive electronic equipment such as radio telescope receivers are cooled to cryogenic temperatures to improve their signal-to-noise ratio. The generic, statistical physical derivation of this noise is called the fluctuation-dissipation theorem, where generalized impedance or generalized susceptibility is used to characterize the medium.
In signal processing, the power spectrum of a continuous time signal describes the distribution of power into frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal as analyzed in terms of its frequency content, is called its spectrum.
In electrical engineering, impedance matching is the practice of designing or adjusting the input impedance or output impedance of an electrical device for a desired value. Often, the desired value is selected to maximize power transfer or minimize signal reflection. For example, impedance matching typically is used to improve power transfer from a radio transmitter via the interconnecting transmission line to the antenna. Signals on a transmission line will be transmitted without reflections if the transmission line is terminated with a matching impedance.
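For purely resistive source and load, maximum power transfer occurs when the load resistance equals the source resistance; a small sketch (the source voltage and resistances are illustrative values):

```python
# Maximum power transfer: for a source with internal resistance R_source,
# power delivered to the load peaks when R_load == R_source.
def load_power(V_source, R_source, R_load):
    """Power dissipated in the load of a simple source-resistor-load loop."""
    current = V_source / (R_source + R_load)
    return current**2 * R_load

Rs, V = 50.0, 10.0
powers = {R: load_power(V, Rs, R) for R in (10.0, 25.0, 50.0, 100.0, 250.0)}
best = max(powers, key=powers.get)
print(best)  # 50.0 -- the matched load
```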
The Drude model of electrical conduction was proposed in 1900 by Paul Drude to explain the transport properties of electrons in materials. Ohm's law was already well established and stated that the current J and the voltage V driving it are related through the resistance R of the material. The inverse of the resistance is known as the conductance; for a metal of unit length and unit cross-sectional area, the conductance is known as the conductivity, which is the inverse of resistivity. The Drude model attempts to explain the resistivity of a conductor in terms of the scattering of electrons by the relatively immobile ions in the metal that act like obstructions to the flow of electrons.
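The Drude result for DC conductivity is σ = n·e²·τ/m. A sketch using typical textbook figures for copper (the carrier density and relaxation time below are approximate illustrative values, not measured data):

```python
# Drude estimate of DC conductivity: sigma = n * e^2 * tau / m
e = 1.602176634e-19      # elementary charge, C (exact SI value)
m_e = 9.1093837015e-31   # electron mass, kg
n = 8.5e28               # conduction electrons per m^3 (copper, approx.)
tau = 2.5e-14            # mean time between scattering events, s (approx.)

sigma = n * e**2 * tau / m_e
print(f"{sigma:.3e} S/m")   # of order 1e7 S/m, comparable to copper's ~6e7 S/m
```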
Joule heating is the process by which the passage of an electric current through a conductor produces heat.
In physics, the S-matrix or scattering matrix relates the initial state and the final state of a physical system undergoing a scattering process. It is used in quantum mechanics, scattering theory and quantum field theory (QFT).
In mathematics, Parseval's theorem usually refers to the result that the Fourier transform is unitary; loosely, that the sum of the square of a function is equal to the sum of the square of its transform. It originates from a 1799 theorem about series by Marc-Antoine Parseval, which was later applied to the Fourier series. It is also known as Rayleigh's energy theorem, or Rayleigh's identity, after John William Strutt, Lord Rayleigh.
In mathematics, the discrete-time Fourier transform (DTFT) is a form of Fourier analysis that is applicable to a sequence of discrete values.
Thiele/Small parameters are a set of electromechanical parameters that define the specified low frequency performance of a loudspeaker driver. These parameters are published in specification sheets by driver manufacturers so that designers have a guide in selecting off-the-shelf drivers for loudspeaker designs. Using these parameters, a loudspeaker designer may simulate the position, velocity and acceleration of the diaphragm, the input impedance and the sound output of a system comprising a loudspeaker and enclosure. Many of the parameters are strictly defined only at the resonant frequency, but the approach is generally applicable in the frequency range where the diaphragm motion is largely pistonic, i.e., when the entire cone moves in and out as a unit without cone breakup.
In digital communication or data transmission, Eb/N0 (energy per bit to noise power spectral density ratio) is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.
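The normalization uses the standard relation Eb/N0 = SNR · (B / R_b), where B is the channel bandwidth and R_b the bit rate; a sketch with illustrative numbers:

```python
import math

def ebn0_db(snr_db, bandwidth_hz, bit_rate):
    """Convert channel SNR to Eb/N0 via Eb/N0 = SNR * (B / R_b), in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return 10 * math.log10(snr_linear * bandwidth_hz / bit_rate)

# Example: 10 dB SNR in a 1 MHz channel carrying 500 kbit/s
result = ebn0_db(10.0, 1e6, 5e5)
print(f"{result:.2f} dB")  # 13.01 dB, i.e. 10 dB + 10*log10(2)
```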
Many letters of the Latin alphabet, both capital and small, are used in mathematics, science, and engineering to denote by convention specific or abstracted constants, variables of a certain type, units, multipliers, or physical entities. Certain letters, when combined with special formatting, take on special meaning.
In applied mathematics, the Wiener–Khinchin theorem or Wiener–Khintchine theorem, also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectral density of that process.
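The discrete, circular form of this statement — that the DFT of the circular autocorrelation equals the power spectrum |X[k]|² — can be verified numerically (the random test sequence is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(128)
N = len(x)

# Circular autocorrelation computed directly: r[k] = sum_n x[n] * x[(n+k) mod N]
r = np.array([np.sum(x * np.roll(x, -k)) for k in range(N)])

# Wiener-Khinchin, discrete circular form: the DFT of the autocorrelation
# equals the power spectrum |X[k]|^2
S_from_r = np.fft.fft(r).real
S_direct = np.abs(np.fft.fft(x)) ** 2
print(np.allclose(S_from_r, S_direct))  # True
```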