My blog on quantitative financial analysis, artificial intelligence for stock investment & trading, and the latest progress in signal processing and machine learning

Tuesday, October 2, 2012

Yes, Let's Move from Sparsity to Structured Sparsity

Igor wrote a cool post on Nuit Blanche yesterday, Pushing the Boundaries in Compressive Sensing, posing the following questions (quoted below):
  • Given that compressive sensing is based on an argument of sparsity 
  • Given that sparsity is all around us because most data seem to be sparse or compressible in a wavelet basis
  • Given that wavelet decomposition shows not just simple compressibility but structured compressibility of most datasets
  • Given that we now have some results showing better compressive sensing with a structured sparsity argument
Isn't it time we stopped talking about RIP? The Donoho-Tanner phase transition? L_1 recovery?

My answer is "YES"! There are a lot of practical scenarios where the signals (or the regression coefficients) have rich structure. Merely exploiting sparsity without exploiting structure is far from enough! Even in cases where the structure information is not obvious and traditional L1 recovery algorithms are generally used, we can still find a way to use structured-sparsity-based algorithms.

In the following I'll give an example on the recovery of compressed audio signals, which is a typical application. I've seen a number of works that used traditional L1 algorithms to do the job. Let's see how an algorithm that exploits block structure can improve the result.

Below is an audio signal of length 81920.
In the compression stage, it was evenly divided into T segments x_i (i = 1, 2, ..., T). We considered five cases: T = 160, 80, 40, 20, and 10. Accordingly, the segments had lengths N = 512, 1024, 2048, 4096, and 8192. Each segment x_i was compressed into y_i = A x_i, where the sensing matrix A was a random Gaussian matrix of size N/2 by N.
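
For concreteness, here is a minimal Matlab sketch of this compression stage (the variable names are mine; x denotes the whole audio signal as a column vector):

T = 160;               % number of segments (one of the five cases)
N = length(x) / T;     % segment length, e.g., 81920/160 = 512
M = N / 2;             % number of measurements per segment
A = randn(M, N);       % random Gaussian sensing matrix
X = reshape(x, N, T);  % the i-th column is the segment x_i
Y = A * X;             % compress all segments at once: y_i = A * x_i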

In the recovery stage, we first recovered the DCT coefficients of each segment, and then reconstructed the original segment from them. This is the traditional approach used with L1 algorithms for this task, since audio signals are believed to be sparser in the DCT domain than in the time domain.
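
As a sketch, the DCT-domain recovery of one segment looks like this in Matlab (dctmtx requires the Signal Processing Toolbox; I use SPGL1's spg_bp call only as a stand-in for a generic L1 solver):

D   = dctmtx(N);                 % orthonormal DCT matrix, so theta_i = D * x_i
Phi = A * D';                    % effective sensing matrix, since x_i = D' * theta_i
theta_hat = spg_bp(Phi, Y(:,i)); % recover the DCT coefficients of the i-th segment by basis pursuit
x_hat = D' * theta_hat;          % map them back to the time domain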

We used six algorithms that do not exploit any structure information: Smoothed L0 (SL0), EM-BG-AMP, SPGL1, BCS, OMP, and SP. Their recovery performance was measured by total runtime and normalized MSE (calculated in dB). The results are given in the following table:
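
For reference, I take the normalized MSE in dB to be the usual definition below, though each paper may differ slightly in the details:

NMSE_dB = 10 * log10( norm(x_hat - x)^2 / norm(x)^2 );  % x: true signal, x_hat: its recovery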

Table: Performance of all algorithms measured in terms of normalized MSE in dB (and runtime in seconds).
From the table, we can see that to obtain good quality we need to increase the segment length. However, the cost is that the recovery time increases significantly. For example, to achieve a quality of MSE = -21 dB, SL0 needed to recover segments of length 8192, and the total time was 2725 seconds!

Now let's apply the BSBL-BO algorithm, a sparse Bayesian learning algorithm that exploits block structure and intra-block correlation, to the same task. As I have said in many places, BSBL-BO requires the user to define a block partition, but the partition does not need to be consistent with the true block structure of the signals. So we defined the block partition as (in Matlab notation) [1:16:N], i.e., blocks of 16 consecutive elements. The complete command for recovering the i-th segment was:
Result = BSBL_BO(A, y_i, [1:16:N], 0, 'prune_gamma', -1, 'max_iters', 15);
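
Wrapped in a loop over all T segments, the recovery looks like the sketch below; the argument meanings and the field Result.x follow my reading of the BSBL_BO package and should be checked against its documentation:

blk   = 1:16:N;        % block partition: start indices of blocks of 16 entries
X_hat = zeros(N, T);   % preallocate the recovered segments
for i = 1:T
    Result = BSBL_BO(A, Y(:,i), blk, 0, ...   % 0: the noiseless setting, as I read it
                     'prune_gamma', -1, ...   % -1: do not prune any blocks
                     'max_iters', 15);        % cap the number of iterations
    X_hat(:,i) = Result.x;                    % Result.x holds the estimated segment
end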

The recovery result for the case N = 512 is given in the above table. Clearly, by exploiting the structure information, BSBL-BO achieved -23.3 dB at a cost of only 82 seconds! The quality was 2 dB better than that of SL0, at only 3% of SL0's time.

What does this result tell us?

First, exploiting both structure and sparsity is very important; merely exploiting sparsity is outdated in many practical applications.

Second, conclusions about which algorithms are fast or slow carry little weight if we do not specify the practical task the algorithms perform; the answer can vary from case to case. For example, in the above experiment SL0 was much faster than BSBL-BO for the same segment length N. However, when the goal is to achieve good quality, SL0 took much longer than BSBL-BO (and could not reach BSBL-BO's quality, no matter how large N was).

Note that I was not trying to compare BSBL-BO with the other algorithms; such a comparison would not be meaningful, since BSBL-BO exploits structure while the other algorithms do not. The point here is that even in traditional applications where L1 algorithms are often used, exploiting structure information can give you a better result!

PS: We recently wrote an invited short review paper on SBL algorithms exploiting intra- and inter-vector correlation:
Bhaskar D. Rao, Zhilin Zhang, Yuzhe Jin, "Sparse Signal Recovery in the Presence of Intra-Vector and Inter-Vector Correlation," SPCOM 2012,
which can be accessed at: http://arxiv.org/pdf/1205.4471v1

Besides, several months ago we published another review paper on SBL algorithms exploiting structure, titled "From Sparsity to Structured Sparsity: Bayesian Perspective," in the Chinese journal Signal Processing. Those who read Chinese can access the paper from here.






5 comments:

  1. Thanks Zhilin. Great post.

    Cheers,

    Igor.

  2. maybe you can mention the chinese paper we published together!

  3. Lei, I have added. Maybe we can write another review paper in English in the future :)

