My blog reporting on quantitative financial analysis, artificial intelligence for stock investment & trading, and the latest progress in signal processing and machine learning

Thursday, August 9, 2012

A reply to the post "More intra-block correlation and sparse sensing matrices" in Nuit Blanche


Last week there was a post titled "More intra-block correlation and sparse sensing matrices" on Nuit Blanche. The post displayed Phil's email, which introduced several nice papers from his group on approximate message passing (AMP) algorithms. There were also some comments on sparse Bayesian learning (SBL). To avoid possible confusion for people who are not familiar with SBL, I sent an email to Igor, who runs Nuit Blanche, and that email was posted on Nuit Blanche yesterday. Many thanks, Igor!

Since my blog mainly focuses on SBL, I have re-posted my email below.


Dear Igor,

I read your post "More intra-block correlation and sparse sensing matrices", which presents Phil's email. After introducing several of his nice papers, at the end of the email Phil said "we get performance that meets or exceeds that of Bayesian or variational techniques, but with orders-of-magnitude reduction in complexity". I believe Phil meant that AMP algorithms meet or exceed the performance of Bayesian or variational techniques in the scenarios/applications given in their papers.

There are several scenarios where SBL has obvious advantages over other algorithms. One is when the sensing matrix is coherent (i.e., its columns are highly correlated). This situation is often encountered in source localization and tracking (e.g., direction-of-arrival estimation, brain source localization, radar detection), biomedical data analysis (e.g., neuroimaging data pattern recognition), and computer vision (e.g., face/expression recognition). SBL algorithms generally have much better performance than other algorithms in these applications. David Wipf also has a NIPS 2011 paper, titled "Sparse Estimation with Structured Dictionaries", which mathematically proves why SBL is better than Lasso in this situation (note that the so-called dictionary matrix in that paper is simply the sensing matrix of a standard CS experiment; I hope readers won't be confused by these names).
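(As a rough illustration of what "coherent" means here, the following minimal Python sketch computes the mutual coherence of a sensing matrix, i.e., the largest absolute correlation between two distinct normalized columns. The matrices and sizes are illustrative choices, not taken from the papers above.)

# Minimal sketch (illustrative, not from the papers): mutual coherence of a
# sensing matrix, i.e., the largest absolute inner product between distinct
# unit-norm columns. Values close to 1 indicate highly correlated columns.
import numpy as np

def mutual_coherence(Phi):
    """Largest absolute correlation between distinct normalized columns of Phi."""
    cols = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)  # normalize columns
    G = np.abs(cols.T @ cols)                                # column correlation matrix
    np.fill_diagonal(G, 0.0)                                 # ignore self-correlations
    return G.max()

# Example: a random Gaussian matrix (low coherence) vs. a matrix whose columns
# are small perturbations of one vector (high coherence).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
base = rng.standard_normal((50, 1))
B = base + 0.1 * rng.standard_normal((50, 200))
print(mutual_coherence(A), mutual_coherence(B))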

There is a source localization demo using a sensing matrix from real-world data, which can be downloaded at:
http://dsp.ucsd.edu/~zhilin/papers/Localization.zip. In this demo I compared three MMV algorithms developed in our lab. Anyone can use his/her favorite algorithm in this demo to see the performance.  

When signals are structured but non-sparse or less sparse, SBL algorithms also have good performance, as shown in my wireless telemonitoring paper, "Low Energy Wireless Body-Area Networks for Fetal ECG Telemonitoring via the Framework of Block Sparse Bayesian Learning". Also, my BSBL paper, titled "Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation", shows that in a noise-free situation, after exploiting block structure and intra-block correlation, BSBL algorithms can exactly recover sparse signals with K non-zero elements from only K measurements with high success rates (>0.99). Thus, SBL is helpful in applications where signals are not very sparse.
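(To make the block-sparse setting concrete, here is a minimal Python sketch that generates a block-sparse signal whose active blocks have intra-block correlation, modeled as AR(1), together with K noise-free measurements, where K equals the number of non-zero elements. The block sizes, correlation level, and sensing matrix are illustrative assumptions, not the exact setup of the BSBL paper.)

# Minimal sketch (illustrative assumptions): a block-sparse signal with
# intra-block correlation and K noise-free measurements, K = number of nonzeros.
import numpy as np

rng = np.random.default_rng(1)
n_blocks, block_len, active_blocks = 20, 5, 3
N = n_blocks * block_len
K = active_blocks * block_len              # number of non-zero elements

x = np.zeros(N)
rho = 0.9                                  # intra-block correlation (AR(1) coefficient)
for b in rng.choice(n_blocks, size=active_blocks, replace=False):
    block = np.zeros(block_len)
    block[0] = rng.standard_normal()
    for t in range(1, block_len):          # correlated samples within the active block
        block[t] = rho * block[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    x[b * block_len:(b + 1) * block_len] = block

Phi = rng.standard_normal((K, N)) / np.sqrt(K)   # only K measurements
y = Phi @ x                                      # noise-free measurements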

SBL algorithms are often criticized for slow speed, but I think the situation will change. One reason is that SBL algorithms can be transformed into iterative reweighted algorithms. For example, in the above BSBL paper, BSBL-L1 can be implemented by iteratively running group Lasso only 3-4 times or fewer. Since more efficient Lasso-type algorithms are proposed every year, BSBL-L1 can be expected to become faster and faster. Another reason is that there are several methods to speed up SBL algorithms, such as the bound-optimization technique used in my BSBL-BO algorithm and the fixed-point method used in my CVPR 2012 paper, titled "Sparse Bayesian Multi-Task Learning for Predicting Cognitive Outcomes from Neuroimaging Measures in Alzheimer's Disease". We also have several much faster algorithms, which will be submitted soon.
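(For readers unfamiliar with the iterative reweighted idea, here is a generic Python sketch: a few outer passes of a weighted Lasso, with the weights updated from the previous solution. It uses plain ISTA as the inner solver and illustrative values for lam and eps; it is not the BSBL-L1 mapping from the paper, only the general pattern it builds on.)

# Generic iterative-reweighted l1 sketch (illustrative, not BSBL-L1 itself).
import numpy as np

def weighted_lasso_ista(Phi, y, w, lam, n_iter=500):
    """Solve min 0.5*||y - Phi x||^2 + lam * sum_i w_i |x_i| by ISTA."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = Phi.T @ (Phi @ x - y)            # gradient step
        z = x - g / L
        thr = lam * w / L                    # per-coefficient soft threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return x

def reweighted_l1(Phi, y, lam=0.01, outer=4, eps=1e-3):
    w = np.ones(Phi.shape[1])                # start with an unweighted l1 penalty
    for _ in range(outer):                   # a handful of reweighted passes
        x = weighted_lasso_ista(Phi, y, w, lam)
        w = 1.0 / (np.abs(x) + eps)          # small coefficients get penalized harder
    return x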

In the end, I want to say that it is hard to claim AMP outperforms SBL, or that SBL outperforms AMP. AMP and SBL, as two benchmarks, have advantages in different applications and scenarios. We all enjoy seeing both AMP and SBL evolving for the better.


Best,
Zhilin
