Research not for publishing papers, but for fun, for satisfying curiosity, and for revealing the truth.

This blog reports the latest progress in
(1) Signal Processing and Machine Learning for Biomedicine, Neuroimaging, Wearable Healthcare, and Smart-Home
(2) Sparse Signal Recovery and Compressed Sensing of Signals by Exploiting Spatiotemporal Structures
(3) My Work


Wednesday, February 23, 2011

Answering Bob's question on the paper: Sparse Signal Recovery with Temporally Correlated Source Vectors Using Sparse Bayesian Learning

Today I received an email from a reader, Bob, asking why T-MSBL is better than T-SBL in high-SNR or noiseless cases. What confused Bob is that, in the paper, T-MSBL is derived as an approximation to T-SBL (using the approximation (20)), and in theory the two algorithms are identical in the noiseless case or the correlation-free case. So, by intuition, T-MSBL should not be better than T-SBL, and in the noiseless or correlation-free cases it should have the same performance as T-SBL.

Thanks, Bob, for the good question.
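
Before answering, a minimal sketch of the setting may help. The snippet below only illustrates the multiple-measurement-vector model with temporally correlated source rows that the paper works with; the AR(1) source model, the dimensions, and all variable names are my own illustrative choices, not taken from the paper's experiments. The correlation-free case Bob refers to corresponds to beta = 0, i.e. B = I.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N x M dictionary, L snapshots, K nonzero source rows
N, M, L, K = 25, 125, 4, 5
beta = 0.9          # AR(1) coefficient of each nonzero row; beta = 0 gives B = I

# Common temporal correlation matrix B: Toeplitz with entries beta^|i-j|
B = beta ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))

# Draw K temporally correlated source rows, each ~ N(0, B)
X = np.zeros((M, L))
support = rng.choice(M, size=K, replace=False)
X[support] = rng.multivariate_normal(np.zeros(L), B, size=K)

A = rng.standard_normal((N, M)) / np.sqrt(N)   # random dictionary
Y = A @ X                                      # noiseless measurements (add noise for the noisy case)

print(np.round(B, 2))
```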

First, I have to say that there is a slight difference between T-MSBL and T-SBL even in the noiseless or correlation-free cases. For T-SBL, the learning rule for B is given by Equation (13). For T-MSBL, the learning rule is given by Equations (28)-(29), which cannot be obtained from (13) using the approximation (20). As I stated in the paper, different estimates of B can result in different performance. But I guess that if T-MSBL adopted the rule (27) instead of (28)-(29), it would have the same performance as T-SBL in the noiseless or correlation-free cases (because (27) is the simplified version of (13)).
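
To see why two different learning rules for B can land on different matrices, and hence different performance, even when the noise is negligible, here is a hypothetical sketch in Python. The two functions below are not the paper's rules (13), (27), or (28)-(29); they only mimic the general flavor of an unconstrained weighted-average update versus a constrained update that forces an AR(1) Toeplitz structure, with all names, normalizations, and toy inputs chosen by me for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

def b_unconstrained(mu, Sigma, gamma):
    """A gamma-weighted average of the per-row posterior second moments,
    in the spirit of a rule like (13)/(27) but NOT the paper's exact formula."""
    L = mu.shape[1]
    B = np.zeros((L, L))
    for i in range(mu.shape[0]):
        B += (Sigma[i] + np.outer(mu[i], mu[i])) / gamma[i]
    return B * L / np.trace(B)          # fix the scale so trace(B) = L

def b_ar1_constrained(B_hat):
    """Force an AR(1) Toeplitz structure using the average first-lag
    correlation, in the spirit of a rule like (28)-(29) but again NOT
    the paper's exact formula."""
    L = B_hat.shape[0]
    r = np.mean(np.diag(B_hat, 1)) / np.mean(np.diag(B_hat))
    r = np.clip(r, -0.99, 0.99)         # keep the matrix well conditioned
    return toeplitz(r ** np.arange(L))

# Toy posterior moments for three "active" rows with L = 4 snapshots
rng = np.random.default_rng(1)
L = 4
mu = rng.standard_normal((3, L))
Sigma = np.stack([0.01 * np.eye(L)] * 3)   # small posterior covariances (high-SNR regime)
gamma = np.array([1.0, 0.5, 2.0])

B1 = b_unconstrained(mu, Sigma, gamma)
B2 = b_ar1_constrained(B1)
print(np.round(B1, 2))
print(np.round(B2, 2))                     # generally differs from B1
```

Because the second function projects the estimate onto the AR(1) Toeplitz family while the first does not, the two matrices generally differ, and feeding different B's into the rest of the algorithm can change the recovered solution. That is the sense in which the choice between a rule like (27) and a rule like (28)-(29) matters even at high SNR.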
