# Invertible matrix

In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix B such that

$\mathbf{AB} = \mathbf{BA} = \mathbf{I}_n$

where In denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by A−1.

A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is 0. Singular matrices are rare in the sense that a square matrix randomly selected from a continuous uniform distribution on its entries will almost never be singular.

Non-square matrices (m-by-n matrices for which m ≠ n) do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n, then A has a left inverse: an n-by-m matrix B such that BA = I. If A has rank m, then it has a right inverse: an n-by-m matrix B such that AB = I.

Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.

While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any commutative ring. However, in this case the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a much stricter requirement than being nonzero. The conditions for the existence of a left inverse or a right inverse are more complicated, since a notion of rank does not exist over rings.

## Properties

### The invertible matrix theorem

Let A be a square n-by-n matrix over a field K (for example the field R of real numbers). The following statements are equivalent, i.e., for any given matrix they are either all true or all false:

• A is invertible, i.e. A has an inverse, is nonsingular, or is nondegenerate.
• A is row-equivalent to the n-by-n identity matrix In.
• A is column-equivalent to the n-by-n identity matrix In.
• A has n pivot positions.
• det A ≠ 0. In general, a square matrix over a commutative ring is invertible if and only if its determinant is a unit in that ring.
• A has full rank; that is, rank A = n.
• The equation Ax = 0 has only the trivial solution x = 0.
• Null A = {0}.
• The equation Ax = b has exactly one solution for each b in Kn.
• The columns of A are linearly independent.
• The columns of A span Kn.
• Col A = Kn.
• The columns of A form a basis of Kn.
• The linear transformation mapping x to Ax is a bijection from Kn to Kn.
• There is an n-by-n matrix B such that AB = In = BA.
• The transpose AT is an invertible matrix (hence the rows of A are linearly independent, span Kn, and form a basis of Kn).
• The number 0 is not an eigenvalue of A.
• The matrix A can be expressed as a finite product of elementary matrices.
• The matrix A has a left inverse (i.e. there exists a B such that BA = I) or a right inverse (i.e. there exists a C such that AC = I), in which case both left and right inverses exist and B = C = A−1.

### Other properties

Furthermore, the following properties hold for an invertible matrix A:

• (A−1)−1 = A;
• (kA)−1 = k−1A−1 for nonzero scalar k;
• (AT)−1 = (A−1)T;
• For any invertible n-by-n matrices A and B, (AB)−1 = B−1A−1. More generally, if A1,...,Ak are invertible n-by-n matrices, then $(\mathbf{A}_1\mathbf{A}_2\cdots\mathbf{A}_{k-1}\mathbf{A}_k)^{-1} = \mathbf{A}_k^{-1}\mathbf{A}_{k-1}^{-1}\cdots\mathbf{A}_2^{-1}\mathbf{A}_1^{-1}$;
• det(A−1) = det(A)−1.

A matrix that is its own inverse, i.e. A = A−1 and A² = I, is called an involution.

### In relation to the identity matrix

It follows from the theory of matrices that if

$\mathbf{AB} = \mathbf{I}$

for finite square matrices A and B, then also

$\mathbf{BA} = \mathbf{I}.$[1]

### Density

Over the field of real numbers, the set of singular n-by-n matrices, considered as a subset of Rn×n, is a null set, i.e., has Lebesgue measure zero. This is true because singular matrices are precisely the zeros of the determinant, which is a polynomial function in the entries of the matrix, and the zero set of a nonzero polynomial has Lebesgue measure zero. Thus in the language of measure theory, almost all n-by-n matrices are invertible.

Furthermore, the n-by-n invertible matrices are a dense open set in the topological space of all n-by-n matrices. Equivalently, the set of singular matrices is closed and nowhere dense in the space of n-by-n matrices.

In practice, however, one may encounter non-invertible matrices. Moreover, in numerical calculations, matrices that are invertible but close to a non-invertible matrix can still be problematic; such matrices are said to be ill-conditioned.

## Methods of matrix inversion

### Gaussian elimination

Gauss–Jordan elimination is an algorithm that can be used to determine whether a given matrix is invertible and to find the inverse. An alternative is the LU decomposition which generates upper and lower triangular matrices which are easier to invert.
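
As an illustration, here is a minimal Python sketch of Gauss–Jordan inversion with partial pivoting (the function name is hypothetical; in practice, library routines such as those behind numpy.linalg.inv should be preferred):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])       # augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular (or nearly so)")
        M[[col, pivot]] = M[[pivot, col]]              # swap the pivot row into place
        M[col] /= M[col, col]                          # scale the pivot row so the pivot is 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]         # eliminate the column elsewhere
    return M[:, n:]                                    # once the left half is I, the right half is A^{-1}
```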

### Newton's method

A generalization of Newton's method as used for a multiplicative inverse algorithm may be convenient if a suitable starting seed can be found:

$X_{k+1} = 2X_k - X_k A X_k.$

Victor Pan and John Reif have done work that includes ways of generating a starting seed.[2][3] Byte magazine summarised one of their approaches as follows (box with equations 8 and 9 not shown):[4]

> The Pan-Reif breakthrough consists of the discovery of a simple and reliable way of evaluating B0, the initial approximation to A-1, which can safely be used for starting Newton's iteration or variants of it. Readers interested in a derivation can consult the references. I give here merely one example of the results. Let me denote the "Hermitian transpose" of A by AH. That is, if A(I,J) is the element in the Ith row and Jth column of the A matrix, then A*(J,I) is the element at the corresponding position in the matrix A*. Here the star denotes complex conjugate (i.e., if an element is $x + iy$, where x and y are real numbers, then the complex conjugate of that element is $x - iy$). If, as is the case to which I have limited all my own calculations, the elements of A are all real, then AH is just the transposed matrix AT of A (wherein elements are interchanged or "reflected" with respect to the main diagonal).
>
> We now introduce a number t, defined by equation 8. In words: We consider the magnitudes of the various elements A(I,J) of the given A matrix that is to be inverted. (In the case of a complex element $x + iy$, its magnitude is $\sqrt{x^2 + y^2}$. In the case of a real element, it is just its unsigned or absolute value.) We add up the magnitudes of the elements in a given row and record the sum. We do the same for the remaining rows and then compare the sums thus obtained. The largest of these row sums - just a number - is designated $\max_{I} \sum_{J} \left\vert A(I,J) \right\vert$. We do the same for the column sums and take the product of these two maxima, designating its reciprocal as the real number t. Finally, we define our initial approximate inverse matrix B0 as shown in equation 9.
>
> That is, the number t multiplies every element of the Hermitian transpose of the A matrix. Pan and Reif give alternative forms, but this will do. And that's all there is to it.
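
As a sketch of the iteration with the quoted seed (B0 = t·AH, where t is the reciprocal of the product of the largest row sum and largest column sum of the element magnitudes), the following Python snippet may help; the function name and stopping rule are illustrative:

```python
import numpy as np

def newton_inverse(A, tol=1e-12, max_iter=100):
    """Newton's iteration X_{k+1} = 2 X_k - X_k A X_k, seeded with B_0 = t * A^H."""
    # t = 1 / (largest row sum of |A|  *  largest column sum of |A|)
    t = 1.0 / (np.abs(A).sum(axis=1).max() * np.abs(A).sum(axis=0).max())
    X = t * A.conj().T                                 # the Pan-Reif starting approximation
    for _ in range(max_iter):
        X = 2 * X - X @ A @ X                          # one Newton step
        if np.linalg.norm(np.eye(A.shape[0]) - X @ A) < tol:
            break
    return X
```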

Otherwise, the method may be adapted to use the starting seed from a trivial starting case by using a homotopy to "walk" in small steps from that to the matrix needed, "dragging" the inverses with them:

$X_{k+1} = 2X_k - X_k A_{k+1} X_k,$ where $A_0 = S,$ $X_0 = S^{-1},$ and $A_N = A$ for some terminating N, perhaps followed by another few iterations at A to settle the inverse.

Using this simplistically on real valued matrices would lead the homotopy through a degenerate matrix about half the time, so complex valued matrices should be used to bypass that, e.g. by using a starting seed S that has i in the first entry, 1 on the rest of the leading diagonal, and 0 elsewhere. If complex arithmetic is not directly available, it may be emulated at a small cost in computer memory by replacing each complex matrix element a+bi with a 2×2 real valued submatrix of the form $\begin{pmatrix} a & b \\ -b & a \\ \end{pmatrix}$ (see square root of a matrix).

Newton's method is particularly useful when dealing with families of related matrices that behave enough like the sequence manufactured for the homotopy above: sometimes a good starting point for refining an approximation of the new inverse is the already obtained inverse of a previous matrix that nearly matches the current one, for example the pair of sequences of inverse matrices used in obtaining matrix square roots by Denman–Beavers iteration. This may need more than one pass of the iteration at each new matrix if the matrices are not close enough together for a single pass to suffice. Newton's method is also useful for "touch up" corrections to the Gauss–Jordan algorithm when the result has been contaminated by small errors due to imperfect computer arithmetic.

### Cayley–Hamilton method

The Cayley–Hamilton theorem allows the inverse of A to be expressed in terms of det(A) and the traces and powers of A:

$\mathbf{A}^{-1} = \frac{1}{\det (\mathbf{A})}\sum_{s=0}^{n-1}\mathbf{A}^{s}\sum_{k_1,k_2,\ldots ,k_{n-1}}\prod_{l=1}^{n-1} \frac{(-1)^{k_l+1}}{l^{k_l}k_{l}!}\mathrm{tr}(\mathbf{A}^l)^{k_l},$

where n is the dimension of A, and the sum is taken over s and the sets of all kl ≥ 0 satisfying the linear Diophantine equation

$s+\sum_{l=1}^{n-1}lk_{l} = n - 1.$
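
For illustration, the sum can be evaluated directly by enumerating the solutions of the Diophantine constraint, as in this Python sketch (the function name is hypothetical, and the method is far slower than elimination; it is shown only to make the formula concrete):

```python
import itertools
import math

import numpy as np

def cayley_hamilton_inverse(A):
    """Invert A via the trace/power formula above; assumes det(A) != 0."""
    n = A.shape[0]
    traces = [np.trace(np.linalg.matrix_power(A, l)) for l in range(1, n)]
    total = np.zeros_like(A, dtype=float)
    for s in range(n):
        budget = n - 1 - s                             # required value of sum_l l * k_l
        for ks in itertools.product(range(budget + 1), repeat=n - 1):
            if sum(l * k for l, k in zip(range(1, n), ks)) != budget:
                continue                               # (k_1, ..., k_{n-1}) is not a solution
            coeff = 1.0
            for l, k in zip(range(1, n), ks):
                coeff *= (-1.0) ** (k + 1) / (l ** k * math.factorial(k)) * traces[l - 1] ** k
            total = total + coeff * np.linalg.matrix_power(A, s)
    return total / np.linalg.det(A)
```

For n = 2 and n = 3 this reduces to the explicit formulas given further below.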

### Eigendecomposition

Main article: Eigendecomposition

If matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is nonsingular and its inverse is given by

$\mathbf{A}^{-1}=\mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}$

where Q is the square (N×N) matrix whose ith column is the eigenvector $q_i$ of A and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e., $\Lambda_{ii}=\lambda_i$. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate:

$\left[\Lambda^{-1}\right]_{ii}=\frac{1}{\lambda_i}$
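
In Python this route can be sketched as follows, assuming A is diagonalizable with no zero eigenvalue (illustrative only; for a plain inverse, elimination-based routines are normally cheaper):

```python
import numpy as np

def eig_inverse(A):
    """Invert a diagonalizable A via A = Q diag(lambda) Q^{-1}."""
    eigenvalues, Q = np.linalg.eig(A)        # columns of Q are the eigenvectors q_i
    Lambda_inv = np.diag(1.0 / eigenvalues)  # a diagonal matrix is inverted entrywise
    return Q @ Lambda_inv @ np.linalg.inv(Q)
```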

### Cholesky decomposition

If matrix A is positive definite, then its inverse can be obtained as

$\mathbf{A}^{-1} = (\mathbf{L}^{*})^{-1} \mathbf{L}^{-1} ,$

where L is the lower triangular Cholesky decomposition of A, and L* denotes the conjugate transpose of L.
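
A short Python sketch of this approach, assuming A is Hermitian positive definite (a production version would use triangular solves rather than forming L−1 explicitly):

```python
import numpy as np

def cholesky_inverse(A):
    """Invert a positive-definite A via A^{-1} = (L*)^{-1} L^{-1}, where A = L L*."""
    L = np.linalg.cholesky(A)      # lower-triangular Cholesky factor
    L_inv = np.linalg.inv(L)       # inverse of a triangular matrix
    return L_inv.conj().T @ L_inv  # (L*)^{-1} L^{-1}
```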

### Analytic solution

Main article: Cramer's rule

Writing the transpose of the matrix of cofactors, known as an adjugate matrix, can also be an efficient way to calculate the inverse of small matrices, but this recursive method is inefficient for large matrices. To determine the inverse, we calculate a matrix of cofactors:

$\mathbf{A}^{-1}={1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}\mathbf{C}^{\mathrm{T}}={1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}} \begin{pmatrix} \mathbf{C}_{11} & \mathbf{C}_{21} & \cdots & \mathbf{C}_{n1} \\ \mathbf{C}_{12} & \mathbf{C}_{22} & \cdots & \mathbf{C}_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{C}_{1n} & \mathbf{C}_{2n} & \cdots & \mathbf{C}_{nn} \\ \end{pmatrix}$

so that

$\left(\mathbf{A}^{-1}\right)_{ij}={1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}\left(\mathbf{C}^{\mathrm{T}}\right)_{ij}={1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}\left(\mathbf{C}_{ji}\right)$

where |A| is the determinant of A, C is the matrix of cofactors, and CT represents the matrix transpose.
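
The cofactor formula translates directly into a short Python sketch (the function name is hypothetical; this version assumes real entries and is only practical for small matrices):

```python
import numpy as np

def adjugate_inverse(A):
    """Invert A as adj(A) / det(A), computing each cofactor from a minor."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor C_ij
    return C.T / np.linalg.det(A)                              # A^{-1} = C^T / |A|
```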

#### Inversion of 2×2 matrices

The cofactor equation listed above yields the following result for 2×2 matrices. Inversion of these matrices can be done easily as follows:[5]

$\mathbf{A}^{-1} = \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} \,\,\,d & \!\!-b \\ -c & \,a \\ \end{bmatrix} = \frac{1}{ad - bc} \begin{bmatrix} \,\,\,d & \!\!-b \\ -c & \,a \\ \end{bmatrix}.$

This is possible because 1/(ad-bc) is the reciprocal of the determinant of the matrix in question, and the same strategy could be used for other matrix sizes.

The Cayley–Hamilton method gives

$\mathbf{A}^{-1}=\frac{1}{\det (\mathbf{A})}\left[ \left( \mathrm{tr}\mathbf{A} \right) \mathbf{I} - \mathbf{A}\right].$

#### Inversion of 3×3 matrices

A computationally efficient 3×3 matrix inversion is given by

$\mathbf{A}^{-1} = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i\\ \end{bmatrix}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} \, A & \, B & \,C \\ \, D & \, E & \, F \\ \, G & \, H & \, I\\ \end{bmatrix}^T = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} \, A & \, D & \,G \\ \, B & \, E & \,H \\ \, C & \,F & \, I\\ \end{bmatrix}$

where the determinant of A can be computed by applying the rule of Sarrus as follows:

$\det(\mathbf{A}) = a(ei-fh)-b(id-fg)+c(dh-eg).$

If the determinant is non-zero, the matrix is invertible, with the elements of the above matrix on the right side given by

$\begin{matrix} A = (ei-fh) & D = -(bi-ch) & G = (bf-ce) \\ B = -(di-fg) & E = (ai-cg) & H = -(af-cd) \\ C = (dh-eg) & F = -(ah-bg) & I = (ae-bd) \\ \end{matrix}$

The Cayley–Hamilton decomposition gives

$\mathbf{A}^{-1}=\frac{1}{\det (\mathbf{A})}\left[ \frac{1}{2}\left( (\mathrm{tr}\mathbf{A})^{2}-\mathrm{tr}\mathbf{A}^{2}\right) \mathbf{I} -\mathbf{A}\mathrm{tr}\mathbf{A}+\mathbf{A}^{2}\right].$

The general 3×3 inverse can be expressed concisely in terms of the cross product and triple product:

If a matrix $\mathbf{A}=\left[\mathbf{x_0},\;\mathbf{x_1},\;\mathbf{x_2}\right]$ (consisting of three column vectors, $\mathbf{x_0}$, $\mathbf{x_1}$, and $\mathbf{x_2}$) is invertible, its inverse is given by

$\mathbf{A}^{-1}=\frac{1}{\det(\mathbf A)}\begin{bmatrix} {(\mathbf{x_1}\times\mathbf{x_2})}^{T} \\ {(\mathbf{x_2}\times\mathbf{x_0})}^{T} \\ {(\mathbf{x_0}\times\mathbf{x_1})}^{T} \\ \end{bmatrix}.$

Note that $\det(A)$ is equal to the triple product of $\mathbf{x_0}$, $\mathbf{x_1}$, and $\mathbf{x_2}$—the volume of the parallelepiped formed by the rows or columns:

$\det(\mathbf{A})=\mathbf{x_0}\cdot(\mathbf{x_1}\times\mathbf{x_2}).$

The correctness of the formula can be checked by using cross- and triple-product properties and by noting that for groups, left and right inverses always coincide. Intuitively, because of the cross products, each row of $\mathbf{A}^{-1}$ is orthogonal to the non-corresponding two columns of $\mathbf{A}$ (causing the off-diagonal terms of $\mathbf{I}=\mathbf{A}^{-1}\mathbf{A}$ to be zero). Dividing by

$\det(\mathbf{A})=\mathbf{x_0}\cdot(\mathbf{x_1}\times\mathbf{x_2})$

causes the diagonal elements of $\mathbf{I}=\mathbf{A}^{-1}\mathbf{A}$ to be unity. For example, the first diagonal element is:

$1 = \frac{1}{\mathbf{x_0}\cdot(\mathbf{x_1}\times\mathbf{x_2})} \mathbf{x_0}\cdot(\mathbf{x_1}\times\mathbf{x_2}).$
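
A direct Python transcription of this cross-product form (3×3 matrices only; the function name is illustrative):

```python
import numpy as np

def cross_product_inverse(A):
    """Invert a 3x3 matrix: the rows of A^{-1} are cross products of columns of A."""
    x0, x1, x2 = A[:, 0], A[:, 1], A[:, 2]
    det = x0 @ np.cross(x1, x2)              # scalar triple product = det(A)
    return np.vstack([np.cross(x1, x2),
                      np.cross(x2, x0),
                      np.cross(x0, x1)]) / det
```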

#### Inversion of 4×4 matrices

With increasing dimension, expressions for the inverse of A get complicated. For n = 4, the Cayley–Hamilton method leads to an expression that is still tractable:

$\mathbf{A}^{-1}=\frac{1}{\det (\mathbf{A})}\left[ \frac{1}{6}\left( (\mathrm{tr}\mathbf{A})^{3}-3\mathrm{tr}\mathbf{A}\mathrm{tr}\mathbf{A}^{2}+2\mathrm{tr}\mathbf{A}^{3}\right) \mathbf{I} -\frac{1}{2}\mathbf{A}\left( (\mathrm{tr}\mathbf{A})^{2}-\mathrm{tr}\mathbf{A}^{2}\right) +\mathbf{A}^{2}\mathrm{tr}\mathbf{A}-\mathbf{A}^{3}\right].$

### Blockwise inversion

Matrices can also be inverted blockwise by using the following analytic inversion formula:

 $\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{A}^{-1}+\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1} & -\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1} \\ -(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1} & (\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1} \end{bmatrix}$ $(1)\,$

where A, B, C and D are matrix sub-blocks of arbitrary size. (A and D must be square, so that they can be inverted. Furthermore, A and D − CA−1B must be nonsingular.[6]) This strategy is particularly advantageous if A is diagonal and D − CA−1B (the Schur complement of A) is a small matrix, since they are the only matrices requiring inversion. This technique was reinvented several times and is due to Hans Boltz (1923),[citation needed] who used it for the inversion of geodetic matrices, and Tadeusz Banachiewicz (1937), who generalized it and proved its correctness.
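
Formula (1) can be sketched in Python as follows, assuming A and the Schur complement D − CA−1B are both invertible and the block sizes are consistent:

```python
import numpy as np

def blockwise_inverse(A, B, C, D):
    """Invert [[A, B], [C, D]] using formula (1)."""
    A_inv = np.linalg.inv(A)
    S_inv = np.linalg.inv(D - C @ A_inv @ B)          # inverse of the Schur complement of A
    top_left = A_inv + A_inv @ B @ S_inv @ C @ A_inv
    return np.block([[top_left,            -A_inv @ B @ S_inv],
                     [-S_inv @ C @ A_inv,   S_inv]])
```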

The nullity theorem says that the nullity of A equals the nullity of the sub-block in the lower right of the inverse matrix, and that the nullity of B equals the nullity of the sub-block in the upper right of the inverse matrix.

The inversion procedure that led to Equation (1) performed matrix block operations that operated on C and D first. Instead, if A and B are operated on first, and provided D and A − BD−1C are nonsingular,[7] the result is

 $\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}^{-1} = \begin{bmatrix} (\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1} & -(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1}\mathbf{BD}^{-1} \\ -\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1} & \quad \mathbf{D}^{-1}+\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1}\mathbf{BD}^{-1}\end{bmatrix}.$ $(2)\,$

Equating Equations (1) and (2) leads to

 $(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1} = \mathbf{A}^{-1}+\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1}\,$ $(3)\,$
$(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1}\mathbf{BD}^{-1} = \mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\,$
$\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1} = (\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1}\,$
$\mathbf{D}^{-1}+\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1}\mathbf{BD}^{-1} = (\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\,$

where Equation (3) is the matrix inversion lemma, which is equivalent to the binomial inverse theorem.

Since a blockwise inversion of an n×n matrix requires inversion of two half-sized matrices and 6 multiplications between two half-sized matrices, it can be shown that a divide and conquer algorithm that uses blockwise inversion to invert a matrix runs with the same time complexity as the matrix multiplication algorithm that is used internally.[8] There exist matrix multiplication algorithms with a complexity of $O(n^{2.3727})$ operations, while the best proven lower bound is $\Omega(n^2 \log n)$.[9]

### By Neumann series

If a matrix A has the property that

$\lim_{n \to \infty} (\mathbf I - \mathbf A)^n = 0$

then A is nonsingular and its inverse may be expressed by a Neumann series:[10]

$\mathbf A^{-1} = \sum_{n = 0}^\infty (\mathbf I - \mathbf A)^n.$

Truncating the sum results in an "approximate" inverse which may be useful as a preconditioner. Evaluation of the truncated series can be accelerated exponentially by noting that the Neumann series is a geometric sum: to compute the first $2^L$ terms, one only needs the powers $(\mathbf I - \mathbf A)^{2^j}$ for $j = 0, 1, \ldots, L-1$, which can be obtained by $L-1$ matrix squarings; combining them into the partial sum takes roughly another $L$ matrix multiplications. Therefore, about $2L$ matrix multiplications suffice to compute $2^L$ terms of the sum.
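
A small Python sketch of this doubling scheme, assuming the powers of I − A do converge to zero (for example when A is close to the identity); the function name and the choice of L are illustrative:

```python
import numpy as np

def neumann_inverse(A, L=6):
    """Approximate A^{-1} by the first 2**L terms of the Neumann series."""
    n = A.shape[0]
    T = np.eye(n) - A        # the series is a geometric sum in T = I - A
    S = np.eye(n) + T        # partial sum of the first two terms
    P = T                    # current power T^(2**j)
    for _ in range(L - 1):
        P = P @ P            # square the power
        S = S + P @ S        # doubles the number of accumulated terms
    return S
```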

More generally, if A is "near" the invertible matrix X in the sense that

$\lim_{n \to \infty} (\mathbf I - \mathbf X^{-1} \mathbf A)^n = 0 \mathrm{~~or~~} \lim_{n \to \infty} (\mathbf I - \mathbf A \mathbf X^{-1})^n = 0$

then A is nonsingular and its inverse is

$\mathbf A^{-1} = \sum_{n = 0}^\infty \left(\mathbf X^{-1} (\mathbf X - \mathbf A)\right)^n \mathbf X^{-1}~.$

If it is also the case that A − X has rank 1, then this simplifies to

$\mathbf A^{-1} = \mathbf X^{-1} - \frac{\mathbf X^{-1} (\mathbf A - \mathbf X) \mathbf X^{-1}}{1+\operatorname{tr}(\mathbf X^{-1} (\mathbf A - \mathbf X))}~.$

## Derivative of the matrix inverse

Suppose that the invertible matrix A depends on a parameter t. Then the derivative of the inverse of A with respect to t is given by

$\frac{\mathrm{d}\mathbf{A}^{-1}}{\mathrm{d}t} = - \mathbf{A}^{-1} \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} \mathbf{A}^{-1}.$

To derive the above expression for the derivative of the inverse of A, one can differentiate the definition of the matrix inverse $\mathbf{A}^{-1}\mathbf{A}=\mathbf{I}$ and then solve for the inverse of A:

$\frac{\mathrm{d}\mathbf{A}^{-1}\mathbf{A}}{\mathrm{d}t} =\frac{\mathrm{d}\mathbf{A}^{-1}}{\mathrm{d}t}\mathbf{A} +\mathbf{A}^{-1}\frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} =\frac{\mathrm{d}\mathbf{I}}{\mathrm{d}t} =\mathbf{0}.$

Subtracting $\mathbf{A}^{-1}\frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t}$ from both sides of the above and multiplying on the right by $\mathbf{A}^{-1}$ gives the correct expression for the derivative of the inverse:

$\frac{\mathrm{d}\mathbf{A}^{-1}}{\mathrm{d}t} = - \mathbf{A}^{-1} \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} \mathbf{A}^{-1}.$

Similarly, if $\epsilon$ is a small number then

$\left(\mathbf{A} + \epsilon\mathbf{X}\right)^{-1} = \mathbf{A}^{-1} - \epsilon \mathbf{A}^{-1} \mathbf{X} \mathbf{A}^{-1} + \mathcal{O}(\epsilon^2)\,.$
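
Both identities can be checked numerically with a finite-difference comparison, as in this illustrative Python snippet (the test matrices are arbitrary):

```python
import numpy as np

def d_inverse(A, dA):
    """d(A^{-1})/dt = -A^{-1} (dA/dt) A^{-1}."""
    A_inv = np.linalg.inv(A)
    return -A_inv @ dA @ A_inv

rng = np.random.default_rng(0)
A = np.eye(3) + 0.1 * rng.normal(size=(3, 3))   # a matrix that is safely invertible
dA = rng.normal(size=(3, 3))                    # an arbitrary direction dA/dt
eps = 1e-6
numeric = (np.linalg.inv(A + eps * dA) - np.linalg.inv(A)) / eps
print(np.allclose(numeric, d_inverse(A, dA), atol=1e-4))   # expected: True
```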

## Moore–Penrose pseudoinverse

Some of the properties of inverse matrices are shared by Moore–Penrose pseudoinverses, which can be defined for any m-by-n matrix.

## Applications

For most practical applications, it is not necessary to invert a matrix to solve a system of linear equations; however, for a unique solution, it is necessary that the matrix involved be invertible.

Decomposition techniques like LU decomposition are much faster than inversion, and various fast algorithms for special classes of linear systems have also been developed.
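
For example, in Python one would normally call a factorization-based solver rather than forming the inverse explicitly; the snippet below contrasts the two on hypothetical test data (numpy.linalg.solve uses an LU-type factorization internally):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 500))
b = rng.normal(size=500)

x_solve = np.linalg.solve(A, b)     # factorize and solve: faster and more accurate
x_inv = np.linalg.inv(A) @ b        # explicit inversion: generally avoided in practice
print(np.allclose(x_solve, x_inv))  # the two agree up to rounding error
```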

### Matrix inverses in real-time simulations

Matrix inversion plays a significant role in computer graphics, particularly in 3D graphics rendering and 3D simulations. Examples include screen-to-world ray casting, world-to-subspace-to-world object transformations, and physical simulations.

### Matrix inverses in MIMO wireless communication

Matrix inversion also plays a significant role in MIMO (Multiple-Input, Multiple-Output) technology in wireless communications. The MIMO system consists of N transmit and M receive antennas. Unique signals, occupying the same frequency band, are sent via N transmit antennas and are received via M receive antennas. The signal arriving at each receive antenna will be a linear combination of the N transmitted signals, forming an N×M transmission matrix H. It is crucial for the matrix H to be invertible for the receiver to be able to recover the transmitted information.

## Notes

1. ^ Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. p. 14. ISBN 978-0-521-38632-6.
2. ^ Pan, Victor; Reif, John (1985), "Efficient Parallel Solution of Linear Systems", Proceedings of the 17th Annual ACM Symposium on Theory of Computing, Providence: ACM
3. ^ Pan, Victor; Reif, John (1985), "Harvard University Center for Research in Computing Technology Report TR-02-85", Cambridge, MA: Aiken Computation Laboratory
4. ^ "The Inversion of Large Matrices". Byte magazine 11 (04): 181–190. April 1986.
5. ^ Strang, Gilbert (2003). Introduction to Linear Algebra (3rd ed.). SIAM. Chapter 2, p. 71. ISBN 0-9614088-9-8.
6. ^ Bernstein, Dennis (2005). Matrix Mathematics. Princeton University Press. p. 44. ISBN 0-691-11802-7.
7. ^ Bernstein, Dennis (2005). Matrix Mathematics. Princeton University Press. p. 45. ISBN 0-691-11802-7.
8. ^ T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to Algorithms, 3rd ed., MIT Press, Cambridge, MA, 2009, §28.2.
9. ^ Ran Raz. On the complexity of matrix product. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing. ACM Press, 2002. doi:10.1145/509907.509932.
10. ^ Stewart, Gilbert (1998). Matrix Algorithms: Basic decompositions. SIAM. p. 55. ISBN 0-89871-414-1.