The MMV model under the common sparsity assumption is given by:

$Y = AX + V,$

where Y is the available data matrix of size N x L, A is the known dictionary matrix of size N x M, X is the unknown solution matrix of size M x L, and V is the noise matrix. Assume:
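As a concrete illustration, the MMV model with a common (row-)sparsity pattern can be simulated as follows. This is a minimal sketch, not code from any MSBL implementation; the problem sizes and noise level are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, L, K = 20, 50, 5, 4      # measurements, dictionary atoms, snapshots, active rows

A = rng.standard_normal((N, M)) / np.sqrt(N)    # known dictionary, N x M
X = np.zeros((M, L))
support = rng.choice(M, size=K, replace=False)  # common sparsity: all columns share this support
X[support] = rng.standard_normal((K, L))        # nonzero rows of the solution matrix
V = 0.01 * rng.standard_normal((N, L))          # noise matrix
Y = A @ X + V                                   # available data matrix, N x L
```

Every column of X has nonzeros only on the shared support, which is exactly the common sparsity assumption the model relies on.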

$X_i = \sqrt{z}\, U_i, \quad i = 1, \ldots, L,$

where X_i is the i-th column of the solution matrix X, z is a positive scalar, and U_i is a column vector with independent elements, the j-th element of U_i following a Gaussian distribution $N(0, \gamma_j)$. Then, following the steps of the derivation of MSBL, we can easily obtain the GSM-based MSBL algorithm.
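Sampling from this Gaussian scale mixture prior can be sketched as below. The mixing density for z is my own assumption for illustration (a Gamma density); the post does not specify one, and the per-row variances gamma are also arbitrary example values:

```python
import numpy as np

rng = np.random.default_rng(1)
M, L = 50, 5
gamma = rng.uniform(0.5, 2.0, size=M)   # per-row variances gamma_j (hypothetical values)
z = rng.gamma(shape=2.0, scale=0.5)     # positive scalar; Gamma mixing density is an assumption

# Each column U_i has independent elements, the j-th ~ N(0, gamma_j)
U = rng.standard_normal((M, L)) * np.sqrt(gamma)[:, None]
X = np.sqrt(z) * U                      # X_i = sqrt(z) U_i, with z shared across all columns
```

Note that a single z scales every column, so conditioned on z the columns of X are i.i.d. Gaussian with covariance $z\,\mathrm{diag}(\gamma)$.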

The algorithm has a similar form to MSBL. Essentially, their only difference is that the $\lambda$ in MSBL becomes $\lambda / z$ in GSM-MSBL. So it seems that GSM-MSBL may correct the learning of $\lambda$. It is known that SBL's learning rule for $\lambda$ is not robust in low-SNR cases, so I expected that GSM-MSBL could yield better recovery performance than MSBL by correcting the learning rule for $\lambda$.
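To make the role of z concrete, here is a minimal sketch of one EM-style MSBL update (in the standard Wipf-Rao form), where z enters only through the effective noise level $\lambda / z$. This is my own illustrative code, not the author's derivation, and `msbl_step` is a hypothetical helper name:

```python
import numpy as np

def msbl_step(Y, A, gamma, lam, z=1.0):
    """One MSBL EM update. With z != 1 the noise parameter lam is replaced by
    lam / z, which is the only change GSM-MSBL introduces (z = 1 recovers MSBL)."""
    N, L = Y.shape
    lam_eff = lam / z                                 # the single point where z acts
    Gamma = np.diag(gamma)
    Sigma_y = lam_eff * np.eye(N) + A @ Gamma @ A.T   # marginal covariance of each column of Y
    Syi = np.linalg.inv(Sigma_y)
    Mu = Gamma @ A.T @ Syi @ Y                        # posterior mean of X
    Sigma = Gamma - Gamma @ A.T @ Syi @ A @ Gamma     # posterior covariance (shared by columns)
    gamma_new = np.mean(Mu**2, axis=1) + np.diag(Sigma)
    return gamma_new, Mu
```

If the learning rule drives z to 1, `msbl_step(..., z=1.0)` is exactly the plain MSBL update, which is the degeneracy discussed next.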

However, by examining the derived learning rule for z, I found that the rule is exactly equal to 1, i.e. z = 1! In this case, GSM-MSBL is identical to MSBL. Admittedly, the above learning rule is obtained from the data using the empirical Bayesian method. Maybe there exist other methods to estimate z, but currently I don't know of any.

**If anybody has got a GSM based MSBL algorithm showing better performance, please let me know.**
