Autocorrelation matrix


The autocorrelation matrix is used in various digital signal processing algorithms. It consists of the elements of the discrete autocorrelation function, R_{xx}(j), arranged in the following manner:

\mathbf{R}_x = \operatorname{E}[\mathbf{x}\mathbf{x}^H] =
\begin{bmatrix}
R_{xx}(0)   & R^*_{xx}(1) & R^*_{xx}(2) & \cdots & R^*_{xx}(N-1) \\
R_{xx}(1)   & R_{xx}(0)   & R^*_{xx}(1) & \cdots & R^*_{xx}(N-2) \\
R_{xx}(2)   & R_{xx}(1)   & R_{xx}(0)   & \cdots & R^*_{xx}(N-3) \\
\vdots      & \vdots      & \vdots      & \ddots & \vdots        \\
R_{xx}(N-1) & R_{xx}(N-2) & R_{xx}(N-3) & \cdots & R_{xx}(0)
\end{bmatrix}

This is a Hermitian matrix and a Toeplitz matrix. If \mathbf{x} is wide-sense stationary, then its autocorrelation matrix is nonnegative definite.
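
As an illustration, the following Python sketch (not part of the original article; the white-noise process, the sample sizes, and the use of NumPy and scipy.linalg.toeplitz are assumptions made here) estimates R_{xx}(j) from realizations of a wide-sense stationary signal and assembles the Hermitian Toeplitz matrix above.

# Illustrative sketch (assumed setup, not from the article): estimate R_xx(j)
# by sample averaging and build the Hermitian Toeplitz autocorrelation matrix.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
N, trials = 4, 10000   # vector length and number of realizations (arbitrary choices)

# Complex white noise is a simple wide-sense stationary example process.
x = rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))

# R_xx(j) = E[x[n+j] * conj(x[n])], estimated by averaging over n and over realizations.
r = np.array([np.mean(x[:, j:] * np.conj(x[:, :N - j])) for j in range(N)])

# SciPy's toeplitz() uses r as the first column and, by default, its conjugate
# as the first row, which is exactly the structure shown above.
R = toeplitz(r)

print(np.allclose(R, R.conj().T))   # True: the matrix is Hermitian

Because the first column holds R_{xx}(0), ..., R_{xx}(N-1) and the first row holds their conjugates, the result has the same layout as the matrix displayed above.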

The autocovariance matrix is related to the autocorrelation matrix as follows:

 \mathbf{C}_x = \operatorname{E} [(\mathbf{x} - \mathbf{m}_x)(\mathbf{x} - \mathbf{m}_x)^H] =  \mathbf{R}_x - \mathbf{m}_x\mathbf{m}_x^H

where \mathbf{m}_x is the vector giving the mean of the signal \mathbf{x} at each time index.
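
This relation can be checked numerically. The sketch below (assumed setup, not from the article; the mean vector and sample size are arbitrary) forms sample estimates of \mathbf{R}_x, \mathbf{C}_x and \mathbf{m}_x and verifies that the identity \mathbf{C}_x = \mathbf{R}_x - \mathbf{m}_x\mathbf{m}_x^H holds for these sample averages:

# Illustrative numerical check (assumed setup) of C_x = R_x - m_x m_x^H.
import numpy as np

rng = np.random.default_rng(1)
N, trials = 3, 200000

# A non-zero-mean complex signal, so the mean term actually matters.
m_true = np.array([1.0 + 0.5j, -0.25j, 0.75])
x = m_true + rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))

m = x.mean(axis=0)                                           # estimate of m_x
R = np.einsum('ti,tj->ij', x, np.conj(x)) / trials           # estimate of R_x = E[x x^H]
C = np.einsum('ti,tj->ij', x - m, np.conj(x - m)) / trials   # estimate of E[(x - m_x)(x - m_x)^H]

# Maximum deviation from the identity; it is zero up to floating-point rounding.
print(np.max(np.abs(C - (R - np.outer(m, np.conj(m))))))

For mean-zero signals, \mathbf{m}_x = \mathbf{0} and the autocovariance and autocorrelation matrices coincide.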
