Over the past decade, much research has been devoted to the understanding and application of what has come to be known as signal subspace and noise subspace processing. This methodology is based on the linear statistical model for vector data: each data vector is the sum of a signal component and a noise component. Given data vectors of length M, the vector space C^M may be spanned by any M linearly independent complex vectors of length M. In many situations the spanning vectors may be partitioned or chosen so that r of them suffice to span the set of all possible signal vectors, the signal subspace. The remaining M − r vectors lie in the noise subspace. The two subspaces are orthogonal, meaning that any signal subspace vector has zero inner product with any noise subspace vector.
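As a minimal numerical sketch of this partition (not from the text; the dimensions M and r are chosen arbitrarily for illustration), a QR factorization of a random complex matrix supplies M orthonormal vectors spanning C^M, which can then be split into an r-dimensional signal subspace basis and an (M − r)-dimensional noise subspace basis:

```python
import numpy as np

# Illustrative sketch: partition C^M into an r-dimensional signal
# subspace and an (M - r)-dimensional noise subspace.
M, r = 6, 2
rng = np.random.default_rng(0)

# QR factorization of a random complex matrix yields M orthonormal
# columns that together span C^M.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Q, _ = np.linalg.qr(A)

S = Q[:, :r]   # basis for the (hypothetical) signal subspace
N = Q[:, r:]   # basis for the noise subspace

# Orthogonality: every signal basis vector has zero inner product
# with every noise basis vector.
print(np.allclose(S.conj().T @ N, 0))  # True
```

Any basis produced this way works for the illustration; in applications the signal subspace is determined by the model, not chosen at random.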
The data covariance matrix is used to estimate the two subspaces. When the estimation is good, for example when the S/N is sufficiently high and the sample size sufficiently large, then M − r dimensions of noise power can be removed effectively from the data, allowing processing to proceed on higher-S/N data. The result is better parameter estimates, decisions, or interpretations.
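The procedure above can be sketched as follows. This is a hedged illustration under assumed parameters (M, r, sample count K, and the noise level are all invented for the example): the sample covariance is eigendecomposed, the eigenvectors belonging to the r largest eigenvalues are taken as the estimated signal subspace, and projecting the data onto that subspace discards the M − r noise dimensions:

```python
import numpy as np

# Illustrative sketch: estimate the signal subspace from the sample
# covariance and project the data onto it.
M, r, K = 8, 2, 2000           # vector length, signal rank, sample count
rng = np.random.default_rng(1)

# Rank-r signal model: r fixed complex vectors with random amplitudes,
# observed in additive white noise.
H = rng.standard_normal((M, r)) + 1j * rng.standard_normal((M, r))
amps = rng.standard_normal((r, K)) + 1j * rng.standard_normal((r, K))
noise = 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
X = H @ amps + noise

# Sample covariance and its eigendecomposition (eigh sorts eigenvalues
# in ascending order).
R = X @ X.conj().T / K
w, V = np.linalg.eigh(R)

# Eigenvectors of the r largest eigenvalues span the estimated
# signal subspace.
Es = V[:, -r:]

# Orthogonal projector onto the estimated signal subspace; applying it
# removes the M - r noise dimensions from the data.
P = Es @ Es.conj().T
X_clean = P @ X
```

With a high S/N and a large sample size, as the text notes, the r dominant eigenvectors closely track the true signal subspace, so X_clean retains essentially all of the signal power while shedding most of the noise.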
The ability to separate signal and noise subspaces rests not only on S/N and sample size, but also on a priori knowledge of the linear statistical model. In the following, I will define the linear statistical model, explain the mathematics of subspaces, and give some examples of interest.