From Wikipedia, the free encyclopedia
In statistical signal processing, statistics, and physics, the spectrum of a time series or signal is a positive real function of a frequency variable, associated with a stationary stochastic process or a deterministic function of time, which has dimensions of power per hertz (Hz) or energy per hertz. Intuitively, the spectrum decomposes the content of a stochastic process into the different frequencies present in that process, and helps identify periodicities. More specific terms in use are the power spectrum, spectral density, power spectral density, and energy spectral density.
In physics, the signal is usually a wave, such as an electromagnetic wave, random vibration, or an acoustic wave. The spectral density of the wave, when multiplied by an appropriate factor, will give the power carried by the wave, per unit frequency, known as the power spectral density (PSD) of the signal. Power spectral density is commonly expressed in watts per hertz (W/Hz).^{[1]}
For voltage signals, it is customary to use units of V^{2} Hz^{−1} for the PSD and V^{2} s Hz^{−1} for the ESD (energy spectral density).^{[2]} Often it is convenient to work with an amplitude spectral density (ASD), which is the square root of the PSD; the ASD of a voltage signal has units of V Hz^{−1/2}.^{[3]} For random vibration analysis, units of g^{2} Hz^{−1} are sometimes used for the PSD of acceleration, where g denotes the g-force.^{[4]}
Although it is not necessary to assign physical dimensions to the signal or its argument, in the following discussion the terms used will assume that the signal varies in time.
The phrase time series has been defined as "... a collection of observations made sequentially in time."^{[5]} But it is also used to refer to a stochastic process that functions as the underlying theoretical model for the process that generated the data (and thus includes consideration of all the other possible sequences of data that could have been observed, but were not). Furthermore, the 'time' can be either continuous or discrete. There are, therefore, four different but closely related definitions and formulas for the power spectrum of a time series.
If x_n (discrete time) or x(t) (continuous time) is a stochastic process, we will refer to a possible time series of data coming from it as a sample, path, or signal of the stochastic process. To avoid confusion, we will reserve the word process for a stochastic process, and use one of the words signal or sample to refer to a time series of data.
For any random variable X, the standard notations ⟨X⟩ or E[X] will be used for the ensemble average, also known as the statistical expectation, and Var(X) for the theoretical variance.
Suppose x_n, from n = 0 to n = N − 1, is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies ν_k are positive):

x_n = Σ_k A_k sin(2πν_k n + φ_k) = Σ_k [a_k cos(2πν_k n) + b_k sin(2πν_k n)],

where a_k = A_k sin(φ_k), b_k = A_k cos(φ_k), and A_k^{2} = a_k^{2} + b_k^{2}.
The variance of x_n is, for a zero-mean function as above, given by (1/N) Σ_{n=0}^{N−1} x_n^{2}. If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).
Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as N → ∞. If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data:

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x_n^{2}.
Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become

x(t) = Σ_k A_k sin(2πν_k t + φ_k)

and

lim_{T→∞} (1/(2T)) ∫_{−T}^{T} x(t)^{2} dt.
But the root mean square of either cos or sin is 1/√2, so the variance of a_k cos(2πν_k t) is a_k^{2}/2 and that of b_k sin(2πν_k t) is b_k^{2}/2. Hence, the power of x(t) which comes from the component with frequency ν_k is (a_k^{2} + b_k^{2})/2 = A_k^{2}/2. All these contributions add up to the power of x(t).
Then the power as a function of frequency is A_k^{2}/2 at the frequency ν_k, and its statistical cumulative distribution function S(ν) will be

S(ν) = Σ_{k : ν_k < ν} A_k^{2}/2.
S is a step function, monotonically non-decreasing. Its jumps occur at the frequencies ν_k of the periodic components of x, and the value of each jump is the power or variance of that component.
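The relation between component amplitudes and power can be checked numerically. The sketch below (NumPy; the two-component signal and all parameter values are illustrative) confirms that the time-averaged power of a zero-mean sum of sinusoids equals the sum of the per-component contributions A_k^{2}/2.

```python
import numpy as np

# Illustrative signal: a sum of two sinusoids sampled over whole periods.
# Each component with amplitude A_k contributes A_k**2 / 2 to the total power.
fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)   # 10 seconds of samples
A1, A2 = 2.0, 0.5              # amplitudes of the 5 Hz and 60 Hz components
x = A1 * np.sin(2 * np.pi * 5 * t) + A2 * np.sin(2 * np.pi * 60 * t)

measured_power = np.mean(x**2)            # time-average power (variance)
predicted_power = A1**2 / 2 + A2**2 / 2   # sum of per-component powers
```

Because the record covers an integer number of periods of both components, the measured power matches the prediction essentially exactly.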
The variance is the covariance of the data with itself. If we now consider the same data but with a lag of τ, we can take the covariance of x(t) with x(t + τ), and define this to be the autocorrelation function c of the signal (or data) x:

c(τ) = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} x(t) x(t + τ) dt.
When it exists, it is an even function of τ. If the average power is bounded, then c exists everywhere, is finite, and is bounded by c(0), which is the power or variance of the data.
It is elementary to show that c can be decomposed into periodic components with the same periods as x:

c(τ) = Σ_k (A_k^{2}/2) cos(2πν_k τ).
This is in fact the spectral decomposition of c over the different frequencies, and is closely related to the distribution of the power of x over the frequencies: the amplitude of a frequency component of c is its contribution to the power of the signal.
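This decomposition can be illustrated numerically. The sketch below (illustrative values; the circular autocorrelation is valid here because the sampled signal is exactly periodic over the record) shows that the autocorrelation of A sin(2πνt + φ) is (A^{2}/2) cos(2πντ): a component with the same period, half the squared amplitude, and no phase.

```python
import numpy as np

# The autocorrelation of A*sin(2*pi*nu*t + phi) should be
# (A**2 / 2) * cos(2*pi*nu*tau), independent of the phase phi.
fs, nu, A = 200, 4.0, 3.0                  # assumed sampling rate, frequency, amplitude
t = np.arange(0, 5, 1 / fs)                # 5 s -> an integer number of periods
x = A * np.sin(2 * np.pi * nu * t + 0.7)   # arbitrary phase of 0.7 rad

lags = np.arange(50)
# circular autocorrelation (exact here because x is periodic on the record)
c = np.array([np.mean(x * np.roll(x, -k)) for k in lags])
expected = (A**2 / 2) * np.cos(2 * np.pi * nu * lags / fs)
```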
Energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term energy is used in the generalized sense of signal processing; that is, the energy of a signal x(t) is^{[6]}

E = ∫_{−∞}^{∞} |x(t)|^{2} dt.
The energy spectral density is most suitable for transients (that is, pulse-like signals) having a finite total energy. In this case, Parseval's theorem gives us an alternate expression for the energy of the signal in terms of its Fourier transform x̂(ω):^{[6]}

∫_{−∞}^{∞} |x(t)|^{2} dt = (1/2π) ∫_{−∞}^{∞} |x̂(ω)|^{2} dω.
Here ω = 2πf is the angular frequency. Since the integral on the right-hand side is the energy of the signal, the integrand |x̂(ω)|^{2} can be interpreted as a density function describing the energy per unit frequency contained in the signal at frequency ω. In light of this, the energy spectral density of a signal x(t) is defined as^{[6]}^{[N 1]}

S_{xx}(ω) = |x̂(ω)|^{2}.
As a physical example of how one might measure the energy spectral density of a signal, suppose V(t) represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance Z, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time t is equal to V(t)^{2}/Z, so the total energy is found by integrating V(t)^{2}/Z with respect to time over the duration of the pulse. To find the value of the energy spectral density at frequency f, one could insert between the transmission line and the resistor a band-pass filter which passes only a narrow range of frequencies (Δf, say) near the frequency of interest, and then measure the total energy E(f) dissipated across the resistor. The value of the energy spectral density at f is then estimated to be E(f)/Δf. In this example, since the power V(t)^{2}/Z has units of V^{2} Ω^{−1}, the energy has units of V^{2} s Ω^{−1} = J, and hence the estimate of the energy spectral density has units of J Hz^{−1}, as required. In many situations, it is common to forgo the step of dividing by Z, so that the energy spectral density instead has units of V^{2} s Hz^{−1}.
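As a numerical counterpart, the sketch below (NumPy; the Gaussian pulse and sampling interval are arbitrary choices) computes the energy of a transient directly in the time domain and again by integrating the energy spectral density over frequency, illustrating Parseval's theorem.

```python
import numpy as np

# Energy of a finite-energy pulse, computed two ways.
dt = 1e-3                                  # sampling interval in seconds (assumed)
t = np.arange(-1, 1, dt)
x = np.exp(-(t / 0.05) ** 2)               # Gaussian pulse, essentially zero at the edges

energy_time = np.sum(np.abs(x) ** 2) * dt  # integral of |x(t)|^2 dt

X = np.fft.fft(x) * dt                     # approximates the continuous Fourier transform
esd = np.abs(X) ** 2                       # energy spectral density |x_hat(f)|^2
df = 1 / (len(x) * dt)                     # frequency resolution
energy_freq = np.sum(esd) * df             # integral of the ESD over frequency
```

The two numbers agree to floating-point precision, since the discrete Parseval identity is exact for the DFT.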
This definition generalizes in a straightforward manner to a discrete signal with an infinite number of values x_n, such as a signal sampled at discrete times x_n = x(n Δt):

S̄_{xx}(ω) = (Δt)^{2} |x̂_d(ω)|^{2},
where x̂_d(ω) = Σ_n x_n e^{−iωnΔt} is the discrete-time Fourier transform of x_n. The sampling interval Δt is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit Δt → 0; however, in the mathematical sciences, the interval is often set to 1.
The above definition of energy spectral density is most suitable for transients, i.e., pulse-like signals, for which the Fourier transforms of the signals exist. For continued signals that describe, for example, stationary physical processes, it makes more sense to define a power spectral density (PSD), which describes how the power of a signal or time series is distributed over the different frequencies, as in the simple example given previously. Here, power can be the actual physical power or, more often, for convenience with abstract signals, can be defined as the squared value of the signal. The total power P of a signal x(t) is the following time average:

P = lim_{T→∞} (1/T) ∫_{0}^{T} x(t)^{2} dt.
The power of a signal may be finite even if the energy is infinite. For example, a 10volt power supply connected to a 1 kΩ resistor delivers (10 V)^{2} / (1 kΩ) = 0.1 W of power at any given time; however, if the supply is allowed to operate for an infinite amount of time, it will deliver an infinite amount of energy (0.1 J each second for an infinite number of seconds).
In analyzing the frequency content of the signal x(t), one might like to compute the ordinary Fourier transform x̂(ω); however, for many signals of interest this Fourier transform does not exist.^{[N 2]} Because of this, it is advantageous to work with a truncated Fourier transform x̂_T(ω), where the signal is integrated only over a finite interval [0, T]:

x̂_T(ω) = (1/√T) ∫_{0}^{T} x(t) e^{−iωt} dt.
Then the power spectral density can be defined as^{[8]}^{[9]}

S_{xx}(ω) = lim_{T→∞} E[|x̂_T(ω)|^{2}].
Here E denotes the expected value; explicitly, we have^{[9]}

S_{xx}(ω) = lim_{T→∞} (1/T) E[|∫_{0}^{T} x(t) e^{−iωt} dt|^{2}].
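This definition can be imitated numerically by averaging |x̂_T(ω)|^{2}/T over many independent realizations of a process. In the sketch below (all parameter values are illustrative), discrete white noise of variance σ^{2} yields, as expected, a flat two-sided PSD of height σ^{2} Δt.

```python
import numpy as np

# Estimate the PSD as the ensemble average of |X_T|^2 / T over many realizations.
rng = np.random.default_rng(0)
N, dt, sigma = 1024, 1e-3, 2.0
trials = 400

psd_sum = np.zeros(N)
for _ in range(trials):
    x = rng.normal(0, sigma, N)
    X = np.fft.fft(x) * dt                 # truncated Fourier transform of one record
    psd_sum += np.abs(X) ** 2 / (N * dt)   # |X_T|^2 / T  with T = N*dt
psd = psd_sum / trials                     # ensemble average approximates E[...]

# For white noise the two-sided PSD should be flat at sigma**2 * dt.
```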
Using such formal reasoning, one may already guess that for a stationary random process, the power spectral density S_{xx}(ω) and the autocorrelation function R(τ) = E[x(t) x(t + τ)] of this signal should be a Fourier transform pair. Provided that R(τ) is absolutely integrable, which is not always true, then

S_{xx}(ω) = ∫_{−∞}^{∞} R(τ) e^{−iωτ} dτ.
The Wiener–Khinchin theorem makes sense of this formula for any wide-sense stationary process under weaker hypotheses: R does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving distributions (in the sense of Laurent Schwartz, not in the sense of a statistical cumulative distribution function) instead of functions. If R is continuous, Bochner's theorem can be used to prove that its Fourier transform exists as a positive measure, whose distribution function is F (but not necessarily as a function and not necessarily possessing a probability density).
Many authors use this equality to actually define the power spectral density.^{[10]}
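The theorem can be checked numerically on a process whose autocovariance is known in closed form. The sketch below uses a moving-average process x_n = w_n + θ w_{n−1} (illustrative parameters), whose autocovariance is 1 + θ^{2} at lag 0, θ at lags ±1, and zero elsewhere; the averaged periodogram should then match the Fourier transform of that autocovariance, S(ω) = 1 + θ^{2} + 2θ cos ω.

```python
import numpy as np

# Wiener-Khinchin check for an MA(1) process x_n = w_n + theta * w_{n-1},
# driven by unit-variance white noise w.
rng = np.random.default_rng(1)
N, theta, trials = 512, 0.6, 500

psd = np.zeros(N)
for _ in range(trials):
    w = rng.normal(size=N + 1)
    x = w[1:] + theta * w[:-1]
    psd += np.abs(np.fft.fft(x)) ** 2 / N    # periodogram of one realization
psd /= trials                                # average over realizations

omega = 2 * np.pi * np.fft.fftfreq(N)        # angular frequency per sample
psd_theory = 1 + theta**2 + 2 * theta * np.cos(omega)  # FT of the autocovariance
```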
The power of the signal in a given frequency band [f_1, f_2] can be calculated by integrating over positive and negative frequencies,

P = ∫_{f_1}^{f_2} S_{xx}(2πf) df + ∫_{−f_2}^{−f_1} S_{xx}(2πf) df = F(f_2) − F(f_1) + F(−f_1) − F(−f_2),

where F(f) = ∫_{−∞}^{f} S_{xx}(2πf′) df′ is the integrated spectrum, whose derivative is the power spectral density.
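A numerical sketch of this band-power calculation (all values illustrative): for a sinusoid of amplitude 3 at 50 Hz, integrating the two-sided periodogram over a band containing ±50 Hz should recover the total power 3^{2}/2 = 4.5.

```python
import numpy as np

# Band power by integrating a two-sided PSD over positive and negative frequencies.
fs = 1000
t = np.arange(0, 4, 1 / fs)
x = 3.0 * np.sin(2 * np.pi * 50 * t)          # power 3**2 / 2 = 4.5, all at 50 Hz

N = len(x)
psd = np.abs(np.fft.fft(x)) ** 2 / (N * fs)   # two-sided periodogram, per Hz
freqs = np.fft.fftfreq(N, 1 / fs)
df = fs / N

band = (np.abs(freqs) > 45) & (np.abs(freqs) < 55)   # the band, at +/- frequencies
band_power = np.sum(psd[band]) * df
```

Because 50 Hz falls exactly on a frequency bin of this record, the recovered band power equals 4.5 to floating-point precision.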
More generally, similar techniques may be used to estimate a time-varying spectral density.^{[citation needed]}
The definition of the power spectral density generalizes in a straightforward manner to a finite time series x_n = x(n Δt), with 1 ≤ n ≤ N, such as a signal sampled at discrete times over a total measurement period T = N Δt:

S_{xx}(ω) = ((Δt)^{2}/T) |Σ_{n=1}^{N} x_n e^{−iωnΔt}|^{2}.
In a real-world application, one would typically average this single-measurement PSD over several repetitions of the measurement to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. One can show that the periodogram converges to the true PSD as the averaging time interval T goes to infinity (Brown & Hwang^{[11]}).
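The effect of averaging can be sketched with white noise, whose true two-sided PSD is flat (equal to the variance, for a unit sampling interval). A single periodogram scatters around that level with a relative standard deviation near 100%, while averaging M repetitions shrinks the scatter roughly as 1/√M (parameter values below are illustrative).

```python
import numpy as np

# Single periodogram vs. an average of M periodograms for unit-variance white noise.
rng = np.random.default_rng(2)
N, M = 1024, 100

def periodogram(x):
    # two-sided periodogram, unit sampling interval
    return np.abs(np.fft.fft(x)) ** 2 / len(x)

single = periodogram(rng.normal(size=N))
averaged = np.mean([periodogram(rng.normal(size=N)) for _ in range(M)], axis=0)

# The true PSD is 1 at every frequency; compare the scatter around it.
scatter_single = np.std(single)
scatter_avg = np.std(averaged)
```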
If two signals both possess power spectral densities, then a cross-spectral density can be calculated by using their cross-correlation function.
Some properties of the PSD include:^{[12]}
- The power spectrum is always real and non-negative.
- The spectrum of a real-valued process is an even function of frequency: S_{xx}(−ω) = S_{xx}(ω).
- The integral of the PSD over all frequencies yields the total power (variance) of the process.
The integrated spectrum or power spectral distribution F(ω) is defined as^{[13]}

F(ω) = ∫_{−∞}^{ω} S_{xx}(ω′) dω′.
Given two signals x(t) and y(t), each of which possess power spectral densities S_{xx}(ω) and S_{yy}(ω), it is possible to define a cross-spectral density (CSD) given by

S_{xy}(ω) = lim_{T→∞} E[x̂_T(ω)* ŷ_T(ω)].
The cross-spectral density (or 'cross power spectrum') is thus the Fourier transform of the cross-correlation function:

S_{xy}(ω) = ∫_{−∞}^{∞} R_{xy}(τ) e^{−iωτ} dτ,

where R_{xy}(τ) = E[x(t) y(t + τ)] is the cross-correlation of x(t) and y(t).
By an extension of the Wiener–Khinchin theorem, the Fourier transform of the cross-spectral density is the cross-covariance function.^{[14]} In light of this, the PSD is seen to be a special case of the CSD for x(t) = y(t).
For discrete signals x_{n} and y_{n}, the relationship between the cross-spectral density and the cross-covariance is

S_{xy}(ω) = Σ_{n=−∞}^{∞} R_{xy}(n) e^{−iωn}.
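For finite sequences this relationship holds exactly between the DFT and the circular cross-correlation, which the following sketch verifies (the signal values are arbitrary); it also checks that the CSD of a signal with itself reduces to the PSD.

```python
import numpy as np

# The cross power spectrum equals the DFT of the circular cross-correlation.
rng = np.random.default_rng(3)
N = 64
x = rng.normal(size=N)
y = rng.normal(size=N)

X, Y = np.fft.fft(x), np.fft.fft(y)
csd = np.conj(X) * Y                      # cross power spectrum

# circular cross-correlation r_xy[k] = sum_n x[n] * y[(n + k) mod N]
r = np.array([np.sum(x * np.roll(y, -k)) for k in range(N)])
csd_from_r = np.fft.fft(r)

# Special case x = y: the CSD reduces to the (real, non-negative) PSD.
psd_self = np.conj(X) * X
```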
The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram.
The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used.
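A Welch-style estimate can be sketched with plain NumPy: split one long record into segments, window each segment, and average the per-segment periodograms (scipy.signal.welch implements a refined version of this; the signal, window, and scaling choices below are illustrative).

```python
import numpy as np

# Welch-style PSD estimate: segment, window, average periodograms.
rng = np.random.default_rng(4)
fs, N = 1000, 8 * 1024
x = np.sin(2 * np.pi * 100 * np.arange(N) / fs) + rng.normal(size=N)

seg = 1024
window = np.hanning(seg)
norm = fs * np.sum(window**2)              # scaling so the estimate is per Hz
segments = x[: N // seg * seg].reshape(-1, seg)
psd = np.mean(np.abs(np.fft.fft(segments * window, axis=1)) ** 2, axis=0) / norm
freqs = np.fft.fftfreq(seg, 1 / fs)

# The 100 Hz tone should dominate the estimate.
peak_freq = abs(freqs[np.argmax(psd)])
```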
The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Considerable effort and expense go into developing and producing electronic instruments called spectrum analyzers, which aid electrical engineers and technicians in observing and measuring the power spectra of signals. The cost of a spectrum analyzer varies with its frequency range, its bandwidth, and its accuracy. The higher the frequency range (S-band, C-band, X-band, Ku-band, K-band, Ka-band, etc.), the more difficult the components are to make, assemble, and test, and the more expensive the analyzer is. Wider bandwidth and more accurate measurement capability likewise increase the cost.
The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density. These devices work at low frequencies and with small bandwidths.
See Coherence (signal processing) for use of the cross-spectral density.
