My blog on quantitative financial analysis, artificial intelligence for stock investment & trading, and the latest progress in signal processing and machine learning

Thursday, October 20, 2011

Noise Folding Puts Tough Requirements on Compressed Sensing Algorithms?

Tonight I read two papers on noise folding in the compressed sensing problem. They are:

[1] M. A. Davenport, J. N. Laska, J. R. Treichler, and R. G. Baraniuk, "The Pros and Cons of Compressive Sensing for Wideband Signal Acquisition: Noise Folding vs. Dynamic Range," preprint, April 2011.

[2] E. Arias-Castro and Y. C. Eldar, "Noise Folding in Compressed Sensing," IEEE Signal Processing Letters, 2011.

Noise folding is a relatively quiet topic in the otherwise hot field of compressed sensing. The problem is described as follows:
y = A (x + n) + v                 (1)
Here the signal itself contains noise n (called 'signal noise' or 'source noise'), and v is the measurement noise. The model can be rewritten as
y = Ax + (An+v) = Ax + u.   (2)
Intuitively, the effective noise has increased, and this brings trouble to reconstruction algorithms.
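To make the rewriting concrete, here is a minimal numerical sketch of models (1) and (2) in Python/NumPy. The problem sizes, sparsity level, and noise strengths below are arbitrary illustrative choices, not values taken from the papers.

import numpy as np

rng = np.random.default_rng(0)
N, M, k = 512, 128, 10                 # ambient dimension, number of measurements, sparsity (illustrative)
sigma0, sigma = 0.1, 0.01              # signal-noise and measurement-noise standard deviations (assumed)

A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))   # Gaussian A; columns have unit norm in expectation
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.normal(size=k)

n = sigma0 * rng.normal(size=N)        # signal (source) noise, which gets folded by A
v = sigma * rng.normal(size=M)         # measurement noise

y1 = A @ (x + n) + v                   # model (1)
u = A @ n + v                          # equivalent noise of model (2)
y2 = A @ x + u                         # model (2)

print(np.allclose(y1, y2))             # True: (1) and (2) describe the same measurements
print(np.var(v), np.var(u))            # the equivalent noise u is much stronger than v alone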

The above two papers rigorously analyze how the signal noise n affects the estimation quality, the RIP condition, and so on.

In [1], the authors consider the case where the matrix A is orthonormal and v is absent. They find that noise folding has a significant impact on the amount of noise present in the CS measurements: every time the subsampling factor g (i.e., the ratio of the number of columns of A to the number of rows) is doubled, the SNR loss increases by roughly 3 dB. In other words,

SNR loss ≈ 3 log2(g) dB.
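Below is a quick empirical check of the 3 dB-per-doubling rule. It is a sketch only: it uses a Gaussian A with i.i.d. N(0, 1/M) entries (unit-norm columns in expectation) rather than the orthonormal A of [1], and it ignores v, but the same g-fold noise amplification shows up. The helper folded_noise_power and all sizes are illustrative choices of mine.

import numpy as np

rng = np.random.default_rng(1)

def folded_noise_power(g, M=128, sigma0=0.1, trials=200):
    # average per-measurement power of A @ n over random draws of A and n (illustrative helper)
    N = g * M
    total = 0.0
    for _ in range(trials):
        A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
        n = sigma0 * rng.normal(size=N)
        total += np.mean((A @ n) ** 2)
    return total / trials

prev_db = None
for g in (1, 2, 4, 8, 16):
    level_db = 10 * np.log10(folded_noise_power(g))
    if prev_db is not None:
        print(f"g = {g:2d}: folded-noise power grew by {level_db - prev_db:.2f} dB")   # roughly 3 dB per doubling
    prev_db = level_db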

In [2], the authors consider the general case (i.e., A is not necessarily orthonormal and the measurement noise v is present) and show that the model (1) is equivalent to

y_hat = B x + z,

where B is a matrix whose coherence and RIP constants are very close to those of A, and z is a zero-mean white noise vector with covariance matrix (sigma^2 + g * sigma_0^2) I. Here E{v} = 0, E{v v^T} = sigma^2 * I, E{n} = 0, and E{n n^T} = sigma_0^2 * I. This result again says that the effect of the signal noise n is to degrade the SNR by a factor of g.

Clearly, these results carry a message for every algorithm designer (note that most algorithms essentially operate on the rewritten model (2)):

1) Design algorithms that work well in strong-noise environments, especially when the subsampling factor g is large.

2) Design algorithms that do not need any prior knowledge of the noise level. In practice we may have some idea of the strength of the measurement noise v, but we usually know much less about the strength of the signal noise n. Consequently, we do not know the strength of the equivalent noise u in the rewritten model (2). Note that many algorithms use knowledge of the noise level (in model (2)) to set a good value for their regularization parameter. This means that the regularization parameter (which is tied to the noise level) should be learned automatically by the algorithm itself rather than pre-set by the user; a small sketch illustrating this point follows the list.
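As one concrete illustration of point 2) (an illustration only, chosen for convenience, not one of the algorithms analyzed in [1] or [2]): sparse-Bayesian-learning / ARD-type estimators learn the noise variance from the data themselves, whereas a Lasso-type solver requires a user-supplied regularization parameter that implicitly encodes the unknown noise level. A minimal sketch using scikit-learn, with illustrative sizes and noise levels:

import numpy as np
from sklearn.linear_model import ARDRegression, Lasso

rng = np.random.default_rng(2)
N, M, k = 256, 96, 8
sigma0, sigma = 0.05, 0.01                       # unknown to the algorithms in practice

A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = A @ (x + sigma0 * rng.normal(size=N)) + sigma * rng.normal(size=M)

# ARD / sparse Bayesian learning: the noise precision is estimated from y itself
ard = ARDRegression(fit_intercept=False).fit(A, y)
print("learned noise precision:", ard.alpha_)    # no noise level was supplied by the user

# Lasso: alpha must be chosen by the user, i.e. it presumes knowledge of the noise level
lasso = Lasso(alpha=0.01, fit_intercept=False).fit(A, y)

for name, coef in (("ARD", ard.coef_), ("Lasso", lasso.coef_)):
    print(name, "relative error:", np.linalg.norm(coef - x) / np.linalg.norm(x))

Whether the learned regularization helps in a particular problem of course depends on the data; the point here is only that no noise level had to be supplied in advance.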

A quick example is EEG source localization. In this application, the measurement noise level is well controlled by the EEG recording hardware, but we know little about the strength of the signal noise. As is well known, the regularization parameter strongly affects an algorithm's performance, so an algorithm with a user-defined regularization parameter may perform far from optimally.
