My blog reports on quantitative financial analysis, artificial intelligence for stock investment & trading, and the latest progress in signal processing and machine learning

Wednesday, December 12, 2012

I successfully defended my Ph.D. dissertation today

I have two good reasons to remember today. One is that I successfully defended my Ph.D. dissertation today. The second is that today is 12/12/12 -- you won't have a day with this triple pattern again in this century :)


 (The EBU-1 building of UCSD; the little house on the top is the 'Fallen Star')



Wednesday, December 5, 2012

You are welcome to attend my dissertation defense on Dec. 12

Finally, my dissertation defense has been scheduled for 9:15am - 11:15am on Dec. 12 (Wednesday) in EBU1 4309.

You are all welcome to attend!

Below are the title and abstract of my presentation.

Sparse Signal Recovery Exploiting Spatiotemporal Correlation


Sparse signal recovery algorithms have significant impact on many fields, including signal and image processing, information theory, statistics, data sampling and compression, and neuroimaging. The core of sparse signal recovery algorithms is to find a solution to an underdetermined inverse system of equations, where the solution is expected to be sparse or approximately sparse. Motivated by practical problems, numerous algorithms have been proposed. However, most algorithms ignore the correlation among the nonzero entries of a solution, which is often encountered in practical problems. Thus, it is unclear how this correlation affects an algorithm's performance and whether the correlation is harmful or beneficial.

This work aims to design algorithms which can exploit a variety of correlation structures in solutions and reveal the impact of these correlation structures on algorithms' recovery performance.

To achieve this, a block sparse Bayesian learning (BSBL) framework is proposed. Based on this framework, a number of sparse Bayesian learning (SBL) algorithms are derived to exploit intra-block correlation in a canonical block sparse model, temporal correlation in a canonical multiple measurement vector model, spatiotemporal correlation in a spatiotemporal sparse model, and local temporal correlation in a canonical time-varying sparse model. Several optimization approaches are employed in the algorithm development, including the expectation-maximization method, the bound-optimization method, and the fixed-point method. Experimental results show that these algorithms significantly outperform existing algorithms.

With these algorithms, we find that different correlation structures affect the quality of estimated solutions to different degrees. However, if these correlation structures are present and exploited, algorithms' performance can be greatly improved. Inspired by this, we connect these algorithms to Group-Lasso type algorithms and iterative reweighted $\ell_1$ and $\ell_2$ algorithms, and suggest strategies for modifying them to exploit the correlation structures for better performance.

The derived SBL algorithms have been used with considerable success in various challenging applications such as wireless telemonitoring of raw physiological signals and prediction of cognition levels of patients from their neuroimaging measures. In the former application, the derived SBL algorithms are the only algorithms so far that achieve satisfactory results. This is because raw physiological signals are neither sparse in the time domain nor sparse in any transformed domains, while the derived SBL algorithms can maintain robust performance for these signals. In the latter application, the derived SBL algorithms achieved the highest prediction accuracy on common datasets, compared to published results. This is because the BSBL framework provides flexibility to exploit both correlation structures and nonlinear relationship between response variables and predictor variables in regression models.
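
A note for readers unfamiliar with the framework mentioned in the abstract: here is a minimal sketch of the canonical block sparse model at the heart of BSBL (following the notation of our papers; the block boundaries $d_i$ are illustrative):

$$ \mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{v}, \qquad \mathbf{x} = [\underbrace{x_1, \cdots, x_{d_1}}_{\mathbf{x}_1^T}, \cdots, \underbrace{x_{d_{g-1}+1}, \cdots, x_{d_g}}_{\mathbf{x}_g^T}]^T, $$

where the solution $\mathbf{x}$ is partitioned into $g$ blocks, of which only a few are nonzero. Each block is given the parameterized Gaussian prior $\mathbf{x}_i \sim \mathcal{N}(\mathbf{0}, \gamma_i \mathbf{B}_i)$: the nonnegative scalar $\gamma_i$ controls block sparsity (the block is zeroed out when $\gamma_i = 0$), while the positive definite matrix $\mathbf{B}_i$ models the intra-block correlation. Learning the $\gamma_i$ and $\mathbf{B}_i$ from the data is what allows the derived algorithms to exploit the correlation structures described above.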







Sunday, November 25, 2012

A new BSBL algorithm has been derived

A fast BSBL algorithm has been derived, and the work has been submitted to IEEE Signal Processing Letters.

Below is the paper:

Fast Marginalized Block SBL Algorithm
by Benyuan Liu, Zhilin Zhang, Hongqi Fan, Zaiqi Lu, Qiang Fu

The preprint can be downloaded at: http://arxiv.org/abs/1211.4909

Here is the abstract:
The performance of sparse signal recovery can be improved if both the sparsity and the correlation structure of signals can be exploited. One typical correlation structure is intra-block correlation in block sparse signals. To exploit this structure, a framework, called the block sparse Bayesian learning (BSBL) framework, has been proposed recently. Algorithms derived from this framework have shown promising performance, but their speed is not very fast, which limits their applications. This work derives an efficient algorithm from the framework, using a marginalized likelihood maximization method, so that it can exploit both the block sparsity and the intra-block correlation of signals. Its recovery performance is close to that of existing BSBL algorithms, but it is much faster. Therefore, it is more suitable for recovering large-scale datasets.


Scientists See Promise in Deep-Learning Programs

The New York Times just published a report on deep learning. Some of the applications mentioned in the report really refreshed my knowledge of the field. Enjoy!

http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html?_r=0



Friday, November 16, 2012

Block Sparse Bayesian Learning (BSBL) has been accepted by IEEE Trans. on Signal Processing

Our work on Block Sparse Bayesian Learning (BSBL) was accepted by IEEE Trans. on Signal Processing last week.

Here is the paper information:

Zhilin Zhang, Bhaskar D. Rao, Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation, to appear in IEEE Trans. on Signal Processing

The preprint can be downloaded at: http://arxiv.org/abs/1201.0862
The codes can be downloaded at: https://sites.google.com/site/researchbyzhang/bsbl

Here is the abstract:

We examine the recovery of block sparse signals and extend the framework in two important directions: one by exploiting signals' intra-block correlation and the other by generalizing signals' block structure. We propose two families of algorithms based on the framework of block sparse Bayesian learning (BSBL). One family, directly derived from the BSBL framework, requires knowledge of the block structure. The other family, derived from an expanded BSBL framework, is based on a weaker assumption about the block structure and can be used when the block structure is completely unknown. Using these algorithms, we show that exploiting intra-block correlation is very helpful in improving recovery performance. These algorithms also shed light on how to modify existing algorithms or design new ones to exploit such correlation for improved performance.

The following are related application papers:

Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, Compressed Sensing for Energy-Efficient Wireless Telemonitoring of Non-Invasive Fetal ECG via Block Sparse Bayesian Learning, IEEE Trans. Biomedical Engineering, 2012, accepted

Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy Consumption and Inexpensive Hardware, IEEE Trans. Biomedical Engineering, vol.59, no.12, 2012 



 

Tuesday, October 23, 2012

Our work on compressed sensing of fetal ECG has been accepted by IEEE T-BME

Our work on compressed sensing of fetal ECG has been accepted by IEEE Trans. Biomedical Engineering. The details are as follows:

Compressed Sensing for Energy-Efficient Wireless Telemonitoring of Non-Invasive Fetal ECG via Block Sparse Bayesian Learning,  
by Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, accepted by IEEE Trans. Biomedical Engineering.   
Available at: http://arxiv.org/abs/1205.1287. Codes can be downloaded at: http://dsp.ucsd.edu/~zhilin/BSBL.html or https://sites.google.com/site/researchbyzhang/bsbl


Note that there are two groups of work on compressed sensing of ECG. One is ECG compression (just like video compression, image compression, etc.); most works actually belong to this group. They generally use MIT-BIH datasets, which are very clean (the noise has been removed).

The other group is compressed sensing of ECG for energy-efficient wireless telemonitoring. There are only a few works in this group, and ours belongs to it. In this group the ECG data are always contaminated by noise and artifacts ('signal noise'), because the goal of telemonitoring is to allow people to walk and even exercise freely; strong noise and artifacts caused by muscle and electrode movement are therefore inevitable. Furthermore, artifacts caused by the battery power level cannot be ignored either. Consequently, the raw ECG recordings are not sparse in the time domain, nor in transformed domains (e.g. the wavelet domain, the DCT domain). However, the strict constraint on energy consumption (and design issues, etc.) of telemonitoring systems discourages filtering or other preprocessing before compression. Put another way, if energy consumption and design were not problems, CS might have no advantages over traditional methods. Thus, CS algorithms have to recover non-sparse signals for this application. It turns out that this problem is very challenging.

Our work not only solves this challenging problem, but also has some interesting mathematical implications:

By linear algebra, there are infinitely many solutions to the underdetermined problem y = Ax. When the true solution x0 is sparse, CS algorithms make it possible to find it. But when the true solution x0 is non-sparse, finding it is much more challenging and new constraints/assumptions are called for. This work shows that by exploiting the unknown block structure and the intra-block correlation of x0, it is possible to find a solution x_est which is very close to the true solution x0. These findings raise new and interesting possibilities for signal compression, as well as theoretical questions in the subject of sparse and non-sparse signal recovery from a small number of measurements y.
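
As a quick refresher for readers, when $A$ has full row rank the full solution set of $y = Ax$ can be written as

$$ \{\, x : x = A^T (A A^T)^{-1} y + v, \ \ v \in \mathcal{N}(A) \,\}, $$

where $\mathcal{N}(A)$ is the null space of $A$. Since the system is underdetermined, $\mathcal{N}(A)$ is nontrivial and the set contains infinitely many solutions; any recovery algorithm must impose extra assumptions (sparsity, block structure, correlation, ...) to single out $x_0$ from this set.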


Below is the paper's Abstract:
Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The design of a telemonitoring system via a wireless body-area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings such as non-sparsity and strong noise contamination, current CS algorithms generally fail in this application.

This work proposes to use the block sparse Bayesian learning (BSBL) framework to compress/reconstruct non-sparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. In particular, the reconstruction does not destroy the interdependence among the multichannel recordings, which ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with very few nonzero entries to compress the recordings; notably, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce CPU execution in the data compression stage.
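
To give a concrete picture, below is a minimal Matlab sketch of how such a sparse binary sensing matrix can be constructed. This is only an illustration of the idea, not the exact matrix used in the paper's experiments:

M = 128; N = 256;           % compress N samples into M measurements
A = zeros(M, N);
for j = 1:N
    idx = randperm(M);      % pick two distinct rows at random
    A(idx(1:2), j) = 1;     % exactly two nonzero entries per column
end
x = randn(N, 1);            % a toy signal
y = A * x;                  % compression: only two additions per sample of x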



PS: When I wrote this paper, I was in my wife's delivery room. My wife was lying on the bed, resting and waiting for her midwife. I was sitting beside her, writing the paper on my laptop (and trying to finish the main framework before the miracle moment). Now the baby is 8 months old.





Tuesday, October 2, 2012

Yes, Let's Move From Sparsity to Structured Sparsity

Igor wrote a cool post on Nuit Blanche yesterday, Pushing the Boundaries in Compressive Sensing, posing the following questions (quoted below):
  • Given that compressive sensing is based on an argument of sparsity 
  • Given that sparsity is all around us because most data seem to be sparse or compressible in a wavelet basis
  • Given that wavelet decomposition not only shows not just simple compressibility but structured compressibility of most datasets
  • Given that we now have some results showing better compressive sensing with a structured sparsity argument
Isn't it time we stopped talking about RIP ? the Donoho-Tanner phase transition ? L_1 recovery ?

My answer is "YES"! There are a lot of practical scenarios where signals (or the regression coefficients) have rich structure. Merely exploiting sparsity without exploiting structure is far from enough! Even in cases where the structure information is not obvious and traditional L1 recovery algorithms are generally used, we can still find a way to use structured-sparsity-based algorithms.

In the following I'll give an example on the recovery of compressed audio signals, which is a typical application. I've seen a number of works that used traditional L1 algorithms to do the job. Let's see how an algorithm that exploits block structure can improve the result.

Below is an audio signal of length 81920.
In the compression stage, it was evenly divided into T segments x_i (i=1,2,...,T). We considered five cases, T = 160, 80, 40, 20, and 10; accordingly, the segments had length N = 512, 1024, 2048, 4096, and 8192. Each segment x_i was compressed into y_i. The sensing matrix A was a random Gaussian matrix of size N/2 by N.

In the recovery stage, we first recovered the DCT coefficients of each segment, and then reconstructed the original segment from them. This is the traditional method used with most L1 algorithms for this task, since audio signals are believed to be more sparse in the DCT domain than in the time domain.
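
For concreteness, here is a minimal Matlab sketch of this compress-then-recover pipeline. It assumes the signal x is already loaded and that dctmtx (Signal Processing Toolbox) is available; l1_solver is a placeholder name for whichever recovery algorithm you prefer (SL0, SPGL1, OMP, ...), not a real function:

N = 512;                             % segment length (one of the five cases)
A = randn(N/2, N);                   % random Gaussian sensing matrix
D = dctmtx(N);                       % orthonormal DCT basis: z = D*x_i, x_i = D'*z
T = floor(length(x)/N);              % number of segments
x_hat = zeros(T*N, 1);
for i = 1:T
    xi = x((i-1)*N+1 : i*N);         % the i-th segment
    yi = A * xi;                     % compression stage
    zi = l1_solver(A*D', yi);        % recover the DCT coefficients of the segment
    x_hat((i-1)*N+1 : i*N) = D'*zi;  % reconstruct the segment
end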

We used six algorithms that do not consider any structure information: Smoothed L0 (SL0), EM-BG-AMP, SPGL-1, BCS, OMP, and SP. Their recovery performance was measured by total time and normalized MSE (in dB). The results are given in the following table:

Table: Performance of all algorithms measured in terms of normalized MSE in dB (and speed in seconds).
From the table, we can see that if we want to obtain good quality, we need to increase the segment length. However, the cost is that the recovery time increases significantly. For example, to achieve a quality of MSE = -21dB, SL0 needed to recover segments of length 8192, and the total time was 2725 seconds!

Now let's apply the BSBL-BO algorithm, a sparse Bayesian learning algorithm exploiting block structure and intra-block correlation, to the same task. As I have said in many places, BSBL-BO needs the user to define a block partition, but the block partition does not need to be consistent with the true block structure of the signal. So we defined the block partition as (in Matlab language): [1:16:N]. The complete command for recovering the i-th segment was:
Result = BSBL_BO(A,  y_i, [1:16:N], 0, 'prune_gamma',-1, 'max_iters',15);

The recovery result for the case N=512 is given in the above table. Clearly, by exploiting the structure information, BSBL-BO achieved -23.3 dB at a cost of only 82 seconds! The quality was 2 dB better than SL0's, at only 3% of SL0's time.

What does this result tell us?

First, exploiting both structure and sparsity is very important; merely exploiting sparsity is outdated in many practical applications.

Second, conclusions about which algorithms are fast or slow mean little without specifying the practical task; the answer varies case by case. For example, in the above experiment SL0 was much faster than BSBL-BO for the same segment length N. However, to achieve a given quality, it took much longer than BSBL-BO (and could not reach BSBL-BO's quality at all, no matter how large N was).

Note that I was not trying to compare BSBL-BO with the other algorithms; such a comparison would not be meaningful, since BSBL-BO exploits structure while the other algorithms do not. The point here is that even for some traditional applications where L1 algorithms are often used, exploiting structure information can give you a better result!

PS: Recently we wrote an invited short review paper on SBL algorithms exploiting intra- and inter-vector correlation,
Bhaskar D. Rao, Zhilin Zhang, Yuzhe Jin, Sparse Signal Recovery in the Presence of Intra-Vector and Inter-Vector Correlation, SPCOM 2012
which can be accessed at: http://arxiv.org/pdf/1205.4471v1

Besides, several months ago we wrote another review paper on SBL algorithms exploiting structure, titled "From Sparsity to Structured Sparsity: Bayesian Perspective", in the Chinese journal Signal Processing. Those who read Chinese can access the paper from here.






Sunday, September 23, 2012

Our paper on compressed sensing of EEG has been accepted

Our paper on compressed sensing of EEG for wireless telemonitoring has been accepted by IEEE Trans. on Biomedical Engineering.

Here is the summary of this paper:
(1) EEG is not sparse in the time domain, nor in transformed domains (e.g. the DCT domain, the wavelet domain).

(2) The BSBL framework is used to recover the non-sparse signals.

(3) The recovery quality is confirmed by independent component analysis decomposition, which is a routine procedure in EEG analysis.

(4) A neutral tone is taken in the comparison of compressed sensing vs. wavelet compression.

This is another of our papers on non-sparse signal recovery. It is also the first step of our big project on brain-computer interfaces (BCI). Although the paper has just been accepted by the journal, its techniques are already outdated within our lab, since I have a number of more powerful algorithms, which will be submitted soon.

Anyway, here is the paper:

Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy Consumption and Inexpensive Hardware, to appear in IEEE Trans. on Biomedical Engineering

Abstract:
Telemonitoring of electroencephalogram (EEG) through wireless body-area networks is an evolving direction in personalized medicine. Among the various constraints in designing such a system, three important ones are energy consumption, data compression, and device cost. Conventional data compression methodologies, although effective in compressing data, consume significant energy and cannot reduce device cost. Compressed sensing (CS), as an emerging data compression methodology, is promising in catering to these constraints. However, EEG is non-sparse in the time domain and also non-sparse in transformed domains (such as the wavelet domain). Therefore, it is extremely difficult for current CS algorithms to recover EEG with a quality that satisfies the requirements of clinical diagnosis and engineering applications. Recently, Block Sparse Bayesian Learning (BSBL) was proposed as a new method for the CS problem. This study introduces the technique to the telemonitoring of EEG. Experimental results show that its recovery quality is better than that of state-of-the-art CS algorithms and is sufficient for practical use. These results suggest that BSBL is very promising for telemonitoring of EEG and other non-sparse physiological signals.






Thursday, August 9, 2012

A reply to the post "More intra-block correlation and sparse sensing matrices" in Nuit Blanche


Last week there was a post, titled "More intra-block correlation and sparse sensing matrices", on Nuit Blanche. The post displayed Phil's email, which introduced several nice papers from his group on approximate message passing (AMP) algorithms. There were also some comments on sparse Bayesian learning (SBL). To avoid possible confusion for people who are not familiar with SBL, I sent an email to Igor, the manager of Nuit Blanche, and that email was posted on Nuit Blanche yesterday. Many thanks, Igor!

Since my blog mainly focuses on SBL, I re-posted my email below.


Dear Igor,

I read your post "More intra-block correlation and sparse sensing matrices", which presents Phil's email. After introducing several of his nice papers, at the end of the email Phil said "we get performance that meets or exceeds that of Bayesian or variational techniques, but with orders-of-magnitude reduction in complexity". I believe Phil was saying that AMP algorithms meet or exceed the performance of Bayesian or variational techniques in the scenarios/applications given in their papers.

There are several scenarios where SBL has obvious advantages over other algorithms. One is when the sensing matrix is coherent (i.e., its columns are highly correlated). This situation is often encountered in source localization and tracking (e.g. direction-of-arrival estimation, brain source localization, radar detection), biomedical data analysis (e.g. neuroimaging data pattern recognition), and computer vision (e.g. face/expression recognition). SBL algorithms generally perform much better than other algorithms in these applications. David Wipf also has a NIPS 2011 paper, titled "Sparse Estimation with Structured Dictionaries", which mathematically proved why SBL is better than Lasso in this situation. (Note that the so-called dictionary matrix in this paper is in fact the sensing matrix of a standard CS experiment; I hope readers won't be confused by these names.)

There is a source localization demo using a sensing matrix from real-world data, which can be downloaded at:
http://dsp.ucsd.edu/~zhilin/papers/Localization.zip. In this demo I compared three MMV algorithms developed in our lab. Anyone can plug his/her favorite algorithm into the demo to see its performance.

When signals are structured but non-sparse or less sparse, SBL algorithms also perform well, as shown in my wireless telemonitoring paper, "Low Energy Wireless Body-Area Networks for Fetal ECG Telemonitoring via the Framework of Block Sparse Bayesian Learning". Also, my BSBL paper, titled "Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation", shows that in a noise-free situation, after exploiting block structure and intra-block correlation, BSBL algorithms can exactly recover sparse signals with K nonzero elements from only K measurements with high success rates (>0.99). Thus, SBL is helpful in applications where signals are not very sparse.

SBL algorithms are often criticized for their slow speed. But I think the situation will change. One reason is that SBL algorithms can be transformed into iterative reweighted algorithms. For example, in the above BSBL paper, BSBL-L1 can be implemented by iteratively running group Lasso 3-4 times or fewer. Since more efficient Lasso-type algorithms are proposed every year, BSBL-L1 can be expected to become faster and faster. Another reason is that there are several methods to speed up SBL algorithms, such as the bound-optimization technique used in my BSBL-BO algorithm, and the fixed-point method used in my CVPR 2012 paper, titled "Sparse Bayesian Multi-Task Learning for Predicting Cognitive Outcomes from Neuroimaging Measures in Alzheimer's Disease". We also have several much faster algorithms, which will be submitted soon.

In the end, I want to say that it is hard to claim AMP outperforms SBL, or SBL outperforms AMP. AMP and SBL, as two benchmarks, have advantages in different applications/scenarios. We all enjoy seeing both AMP and SBL evolving for the better.


Best,
Zhilin

Thursday, May 31, 2012

More About the BSBL Codes and Demo Files for Telemonitoring of Physiological Signals

I have received several emails asking questions about the BSBL codes, especially the demo files for telemonitoring of fetal ECG in the package. I list these questions and my answers below:

1. Why is the sensing matrix Phi provided in the software package?
     The Phi.mat in the software package stores a binary sparse matrix, which is used as the sensing matrix. But I have to emphasize that it is not an optimal sensing matrix. I just randomly generated it, stored it, and used it for all my experiments. This helps people exactly reproduce my results (for a given dataset, different sensing matrices can result in slightly different performance). As you know, designing an optimal sparse sensing matrix is an important direction in compressed sensing.

2. Can the demo files for telemonitoring of fetal ECG be directly used in a telemonitoring system?
     The demo files are just a simulation of telemonitoring. In a real system, each channel recording should be processed by a BSBL-BO algorithm (in parallel), and one may need to use online ICA algorithms or at least online batch algorithms. The ICA algorithm used in the demo files is an offline algorithm. You can use the (Extended) Infomax algorithm, an online algorithm available in EEGLab.

3. Have you built any real telemonitoring systems using the algorithm?
    The second author (Prof. Tzyy-Ping Jung) has led a team to build real-time telemonitoring systems for EEG, and some products are being sold in Taiwan.
    These systems have not yet used compressed sensing techniques, but we are now considering incorporating a BSBL algorithm. If you are interested, please check the following link for videos of these telemonitoring systems in various applications:
      http://sccn.ucsd.edu/~jung/Site/Demos.html
     
4. Can BSBL-BO work well when used in the indirect recovery way (i.e. first recovering the representation coefficients of the signal in some transformed domain, and then reconstructing the original signal)?
    Yes, of course. In DEMO_nonSparse.m I give an example in which BSBL-BO first recovers the DCT coefficients and then reconstructs the original fetal ECG. The performance is improved in this way. But this method does not work well for all physiological signals: in my experiments on some public EEG and EMG datasets, BSBL-BO using this method sometimes had poorer performance than without it.

5. Speed improvement
     I am very glad that some people are interested in BSBL-BO and want to try it in their work. Please note that the code is not optimized for speed; I chose a coding style for readability rather than speed. There are several ways to improve the code:
     (1) Remove the many "if" branches. The current code considers many simulation scenarios, such as the block sparsity model with known partition (identical or random block sizes), with unknown partition, recovery of non-sparse structured signals, etc. Depending on your problem, you can remove the unnecessary parts.
     (2) There is a matrix inverse in each iteration. You can replace the current method with a faster one (check matrix computation textbooks; see the sketch after this list), or consider the fast strategy used in Tipping and Faul (2003).
     (3) Updating the intra-block correlation value in each iteration is not necessary; in some cases you can choose a good value from a priori information. Likewise, updating the noise variance in each iteration is not necessary.
     (4) Although I said in the paper that BSBL-L1 is not suitable for telemonitoring of non-sparse physiological signals, I later found that replacing the group-Lasso type algorithms in BSBL-L1 with some specific algorithms yields good performance with dramatically accelerated speed.
    Of course, as the author of the algorithm, I am responsible for making the code more efficient. When I have time, I will improve it along these lines, but in the near future I have no time to do so (currently I am deriving SBL algorithms to solve multi-modal/multi-dataset/longitudinal problems in neuroimaging). If you have improved it, or want to improve it, please let me know. I would like to share my ideas with you, and collaboration is welcome.
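
Regarding point (2), here is a minimal Matlab sketch of the idea (illustrative only; the variable names do not match those in the BSBL-BO code). When a symmetric positive definite matrix is "inverted" only to form products of the type inv(C)*b, one Cholesky factorization followed by two triangular solves is cheaper and numerically more stable than calling inv():

n = 200;
R = randn(n); C = R*R' + n*eye(n);   % a symmetric positive definite matrix
b = randn(n, 1);
w1 = inv(C) * b;                     % the straightforward (slower) way
L = chol(C, 'lower');                % C = L*L'
w2 = L' \ (L \ b);                   % two triangular solves
norm(w1 - w2)                        % the two results agree to numerical precision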






Wednesday, May 30, 2012

Codes of the BSBL Family and Demos for Telemonitoring of ECG have been updated

I have updated the codes of the BSBL family (including BSBL-EM, BSBL-BO, and EBSBL-BO). Their speed is twice that of the old versions. Furthermore, unlike the old versions, they are not sensitive to the scaling of practical datasets.

The download link is here: http://dsp.ucsd.edu/~zhilin/BSBL_public.zip (sometimes the link does not work, but you can download the package from the bottom of this page: https://sites.google.com/site/researchbyzhang/bsbl).

Furthermore, I added a folder containing the demo files for the experiment in my post (http://marchonscience.blogspot.com/2012/05/is-compressed-sensing-really-useful-for_5843.html), including using BSBL-BO to recover the raw recordings and then using FastICA to do the ICA decomposition. If you think other algorithms can do the same work as BSBL-BO, you can use the demo files to try your favorite ones. And I will be very glad if you send your results to me.

I will release the code of BSBL-L1 soon.

Starting tomorrow, I will update the T-MSBL group of codes. I am very sorry that the old version of T-MSBL is slow; this was due to several reasons. I have modified the code, and its speed is now at least twice that of the old one. In addition, I will release other variants of T-MSBL soon.





Is compressed sensing really useful for wireless telemonitoring of non-sparse physiological signals (Part 3)?

In the last post, the signal to recover was relatively easy, because the two fetal QRS complexes were relatively strong. Now I will show you another example, in which the fetal QRS complexes are almost invisible.

Below are 8-channel abdominal recordings (the download link can be found in [1]). The peaks you see are the QRS complexes of the maternal ECG, which we are not interested in. What we are interested in are the QRS complexes of the fetal ECG, which peak around sampling points 6, 217, 433, 643, 854, 1065, 1278, 1488, 1697, 1912, 2110, and 2317. However, you can hardly observe such peaks in any channel, since the fetal ECG is very weak and buried in noise. So any sparsifying processing can completely remove the fetal ECG QRS complexes.
The 8 recordings are compressed using the same binary sparse matrix A as in my last post and recovered by BSBL-BO with the same input parameters (using the direct recovery way). The results are shown below:
Clearly, all 8 channel recordings are recovered well. But note that the recovered recordings will be processed by independent component analysis (ICA) to extract a clean fetal ECG. Therefore, if there is any small distortion in the recovered recordings (the distortion may not be discernible but could destroy the mutual structure across the channel recordings), the ICA decomposition of the recovered recordings will differ from the ICA decomposition of the original recordings. Ensuring that the ICA decomposition is unchanged is crucial for practical clinical diagnosis.

Hence, we perform ICA decomposition on the recovered recordings using FastICA, and for comparison, we also perform the same ICA decomposition on the original recordings. The ICA decomposition of the original recordings is given below:

The ICA decomposition of the recovered recordings is given below:

Clearly, there is no significant difference between the two ICA decompositions. More importantly, the fetal ECG and maternal ECG separated from the recovered recordings are the same as those from the original recordings. For your convenience, I zoom in on some of the separated components. Below are the separated maternal ECG (a)(b) and fetal ECG (c) from the original recordings:
And below are the separated maternal ECG (a)(b) and fetal ECG (c) from the recovered recordings:

Clearly, the fetal ECG separated from the recovered recordings is almost the same as the one separated from the original recordings.
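
For those who want to repeat this sanity check, here is a minimal Matlab sketch. It assumes X_orig and X_rec are the 8 x T matrices of the original and recovered recordings (channels in rows) and that the FastICA Matlab package (function fastica) is on the path; remember that FastICA uses random initialization, so the order and signs of the components may differ between runs:

S_orig = fastica(X_orig);            % ICA decomposition of the original recordings
S_rec  = fastica(X_rec);             % ICA decomposition of the recovered recordings
K = min(size(S_orig,1), size(S_rec,1));
figure;                              % plot the separated components side by side
for k = 1:K
    subplot(K, 2, 2*k-1); plot(S_orig(k,:));
    subplot(K, 2, 2*k);   plot(S_rec(k,:));
end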

Note that in this task BSBL-BO recovered the 8-channel recordings using the direct recovery method (without first recovering the representation coefficients and then reconstructing the original recordings), as stated in my last post. The results show that BSBL-BO is very powerful in recovering non-sparse structured signals.

As I said earlier, the indirect recovery method, i.e. first recovering the representation coefficients and then reconstructing the original recordings, is not effective in every situation, because in the applications considered here the small representation coefficients (in the transformed domain) are very important and must be recovered. In [1] I compared several well-known algorithms that used the indirect method. They all recovered the significant representation coefficients, and thus the maternal ECG could be obtained from the ICA decomposition. However, the ICA decompositions of their recovered recordings were still largely different from the genuine ICA decomposition, and no fetal ECG could be obtained. This is because the representation coefficients in the transformed domain are still not sparse enough, and these algorithms could not recover the majority of the small coefficients. Interested readers can refer to Fig. 13 in [1].

PS: Using the indirect recovery method, BSBL-BO can get even better performance.


We have other results using BSBL-BO to recover EEG for brain-computer interfaces. We are now revising the paper; once it is done, I will post it on my homepage.


[1] Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, Low Energy Wireless Body-Area Networks for Fetal ECG Telemonitoring via the Framework of Block Sparse Bayesian Learning, submitted to IEEE Trans. on Biomedical Engineering





Is compressed sensing really useful for wireless telemonitoring of non-sparse physiological signals (Part 2)?

In my last post, I discussed the difficulties of using compressed sensing in wireless telemonitoring of non-sparse physiological signals. Essentially, these difficulties come from the requirements of practical use: the recovered signals are generally further processed by other signal processing algorithms (e.g. independent component analysis) and/or pattern recognition algorithms (e.g. classification); and some small characteristics of the signals, although not salient, are very important for diagnosis (e.g. the QRS complexes of fetal ECG in raw abdominal recordings, the P-wave and T-wave in abnormal ECG recordings, the spike duration in EMG, and the fluctuation in some EEG signals). In the following, I will give you examples and show the results of the BSBL-BO algorithm.

The first example is the recovery of raw fetal ECG recordings (recorded from the mother's abdomen).

First, I use the fetal ECG recording from my last post to show the use of BSBL-BO. The recording contains one large peak (the QRS complex of the mother's ECG, which is not of interest), two small peaks (the QRS complexes of the fetus' ECG, which are of interest), and large noise fluctuations. The recording can be viewed as a signal with block structure (the three peaks are blocks) contaminated by strong noise (it is 'source noise', not the 'sensor noise' of the standard noisy compressed sensing model). Denote the recording by x. It is compressed to y by y = Ax, where A is a 125 x 250 binary sparse matrix (see [1] for details).

Now I use BSBL-BO to recover x from y using A. Although BSBL-BO requires a block partition as input, we have pointed out that the block partition is in fact a regularization that helps learn the covariance matrix of the signal in a high dimensional space, and experiments also confirm that the user-defined block partition does not need to be consistent with the true block partition of the signal. Therefore, I arbitrarily set the block partition to [1, 26, 51, 76, 101, ...], where each number indicates the starting location of a block.

If you are familiar with SBL algorithms, you know that they generally have a pruning mechanism, namely a threshold used to prune out irrelevant coefficients. BSBL-BO also has such a mechanism. But note that in this example the recording x is a non-sparse signal; every entry is nonzero. Therefore, I need to turn off the pruning mechanism. This can be done by setting the input argument 'prune_gamma' to any non-positive constant (e.g. -1 or 0).

Clearly, there is strong intra-block correlation in x, so we need to learn and exploit the intra-block correlation for better performance. This can be done by setting the input argument 'learntype' to 1 in the BSBL-BO code.
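
Putting these settings together, a minimal sketch of the complete call is below (y: the 125 x 1 compressed data; A: the 125 x 250 binary sparse matrix; I assume here that the recovered signal is returned in the field Result.x, as in the demo files of the package):

blkStartLoc = [1:25:250];                 % arbitrary block partition: [1, 26, 51, 76, 101, ...]
Result = BSBL_BO(A, y, blkStartLoc, 0, ...
                 'prune_gamma', -1, ...   % turn off the pruning mechanism
                 'learntype', 1);         % learn and exploit intra-block correlation
x_hat = Result.x;                         % the recovered (non-sparse) recording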

Then, running the algorithm, I get the results shown below:
The top picture shows the original recording x, the second shows the result when exploiting the intra-block correlation, and the last shows the result when ignoring the intra-block correlation. Clearly, when exploiting the intra-block correlation, the original signal is recovered with high quality; the fetus' QRS complexes are recovered well.

I encourage you to repeat the above experiment using the demo code 'DEMO_nonSparse' in the package: dsp.ucsd.edu/~zhilin/BSBL_public.zip (sometimes the link does not work, but you can download the package from the bottom of this page: https://sites.google.com/site/researchbyzhang/bsbl).

I should emphasize: in this task BSBL-BO directly recovers a non-sparse signal, namely, it uses y and A to directly recover the non-sparse x from the underdetermined inverse problem y = Ax. I have not seen any other algorithm that can successfully do this (in [1] I compared many representative algorithms).

Of course, we can assume that x is represented in another domain (i.e. x = Bz), first recover the representation coefficients z by solving the inverse problem y = (AB)z, and then reconstruct the original signal x by x = Bz. Using this method, BSBL-BO gets an even better result (with the same input parameters). And I believe that, using this method, some existing algorithms may recover x as well. But note that when other algorithms adopt this method to recover a non-sparse signal, they rely heavily on how sparse the representation coefficients z are. In fact, in most practical telemonitoring scenarios, z is not sparse enough. In the next post, I will show you another example (it should surprise you!).



[1] Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, Low Energy Wireless Body-Area Networks for Fetal ECG Telemonitoring via the Framework of Block Sparse Bayesian Learning, submitted to IEEE Trans. on Biomedical Engineering

Monday, May 28, 2012

Is compressed sensing really useful for wireless telemonitoring of non-sparse physiological signals (Part 1)?


Applying compressed sensing to wireless telemonitoring of physiological signals is not new; a number of works have been published in recent years. However, the achievements are very limited.

This is due to the following reasons:

1. Most published works were only successful on a few types of physiological signals, such as adult ECG, because these signals are relatively sparse (or become sparse after some sparsifying processing). However, most physiological signals are non-sparse, such as fetal ECG, EEG, EMG, etc.

2. Current published works require that signals contain little noise and interference (otherwise the noisy signals are not sparse). However, the main goal of wireless telemonitoring is to allow people to walk freely at home or in the office. Thus artifacts/interference from muscle movement are unavoidable.

3. Most published works require some sparsifying pre-processing, namely setting the entries of signals with small values to zero. This pre-processing requires some complicated techniques to set a threshold (used to zero out the entries), and it has three crucial problems:
   (1) These techniques change the underlying structure of the signals. For fetal ECG, which is very weak and even invisible in the recorded signals, such a technique can completely remove the fetal ECG's QRS complexes.
   (2) These techniques occupy extra on-chip computation and thus consume extra energy, while energy consumption is a crucial concern in wireless telemonitoring.
   (3) The setting of thresholds may fail in practical scenarios, such as when people are walking or in some abnormal situations (remember that the target market of telemonitoring is patients).

4. More importantly, practical wireless telemonitoring systems generally collect/send multi-channel signals. For example, in telemonitoring of EEG for BCI, the channel number generally ranges from 2 to 8; in telemonitoring of fetal ECG, it generally ranges from 4 to 12. These multi-channel signals are sent to remote terminals, where they are further processed by advanced signal processing techniques, such as independent component analysis (ICA). These multi-channel techniques require that the underlying structure across the channels remain intact. For example, ICA requires that the underlying ICA mixing structure in the collected multi-channel signals be intact. Therefore, a practical telemonitoring system requires that the raw physiological signals be recovered completely, even including the noise/interference contained in the recordings. In other words, the non-sparse signals must be recovered completely, including their entries with small amplitudes.

Clearly, if a compressed sensing algorithm is to be widely used in wireless telemonitoring, it must have the ability to recover non-sparse physiological signals, which completely contradicts the sparsity assumption of all compressed sensing algorithms.

One may ask: isn't there another way to do compressed sensing of non-sparse signals? Suppose a non-sparse signal can be represented in a transform domain (e.g. a wavelet domain, the DCT domain, etc.), such that the representation coefficients are sparse, i.e.,
x = Bz,
where x is the signal, B is the basis of the transform domain, and z is the vector of representation coefficients. Then the signal x is compressed by
y = Ax,
where y is the compressed signal and A is the sensing matrix. At the remote terminal, the compressed data y and the matrix product AB are used to recover the sparse coefficients z according to the relation y = (AB)z, and then the original signal x is recovered via x = Bz.

However, the success of this method relies on the sparsity of the representation coefficients z. Unfortunately, for most physiological signals z is not sparse enough, and completely recovering z is still a challenge for most compressed sensing algorithms (especially when the sensing matrix A is a binary sparse matrix, a requirement of telemonitoring with low energy consumption).

Here is an example. We transform a segment of a fetal ECG signal into the DCT domain and look at the representation coefficients:


As we can see, the coefficients are still not sparse; there are many coefficients with small values. For any compressed sensing algorithm, recovering the big coefficients is not a problem, but recovering the small ones is. Remember, recovering the small coefficients is very important for maintaining the underlying structure of the signals, especially for later multi-channel signal processing. For this example, recovering the small coefficients is important for recovering the fetal ECG's QRS complexes (i.e. the peaks at 65 and 180 in figure (a)) and for maintaining the underlying structure for later ICA decomposition (the goal is to use ICA to extract the weak fetal ECG, not just to recover the noisy recording).
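
If you want to reproduce this check yourself, here is a minimal Matlab sketch. It assumes x holds one raw abdominal fetal ECG segment (e.g. 250 samples) and that the Signal Processing Toolbox function dct is available:

z = dct(x);                          % DCT coefficients of the segment
figure;
subplot(2,1,1); plot(x); title('(a) raw segment in the time domain');
subplot(2,1,2); plot(z); title('(b) its DCT coefficients');
% Note how many coefficients are small but nonzero; zeroing them out would
% destroy the fetal QRS complexes and the structure needed by later ICA.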

In the next post, I will introduce my work:

Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, Low Energy Wireless Body-Area Networks for Fetal ECG Telemonitoring via the Framework of Block Sparse Bayesian Learning, submitted to IEEE Trans. on Biomedical Engineering

and I will show you how to use my BSBL-BO algorithm to break through the bottleneck in wireless telemonitoring using compressed sensing.









Sunday, May 27, 2012

New Papers on Sparse Bayesian Learning (SBL)

I have been super busy these days, and I am sorry that I have not updated my blog for a long time. Finally, I have found some time to write a few things.


First, I've uploaded our CVPR 2012 paper on my website. I will upload the code tomorrow.

Jing Wan*, Zhilin Zhang*, Jingwen Yan, Taiyong Li, Bhaskar D. Rao, Shiaofen Fang, Sungeun Kim, Shannon Risacher, Andrew Saykin, Li Shen, Sparse Bayesian Multi-Task Learning for Predicting Cognitive Outcomes from Neuroimaging Measures in Alzheimer's Disease, CVPR 2012 (*equal contribution)


I have also uploaded my work on wireless telemonitoring of fetal ECG using compressed sensing. I planned to release the code once I had finally tested and optimized it. But since I listed the paper on my homepage, I have received many emails asking for the code (wow, it seems that many people are interested in telemonitoring/BCI and are curious how compressed sensing can help this field). So I am going to release a debug version of the code tomorrow:

Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, Low Energy Wireless Body-Area Networks for Fetal ECG Telemonitoring via the Framework of Block Sparse Bayesian Learning, submitted to IEEE Trans. on Biomedical Engineering


We have an invited review paper on sparse Bayesian learning for structured signals. This paper reviews our lab's work on SBL. We also hoped to include all the interesting SBL papers on structured signals, but since this is a short paper, we had to make a limited selection.

Bhaskar D. Rao, Zhilin Zhang, Yuzhe Jin, Sparse Signal Recovery in the Presence of Intra-Vector and Inter-Vector Correlation, International Conference on Signal Processing and Communications (SPCOM 2012)


I was also invited (together with my friend Lei) to write a review paper on SBL for a Chinese journal. I posted my part on my homepage; it can be downloaded here: http://dsp.ucsd.edu/~zhilin/papers/SBL_introduction.pdf. I hope this short review gives Chinese readers a bird's-eye view of my work on SBL.


Finally, I was selected to attend the Doctoral Consortium of CVPR 2012. I will present a poster covering all my work on SBL. I am eager to see you and receive your comments.









Monday, April 16, 2012

Celebrating the 50th Anniversary of the Neurosciences Research Program: Neuroscience and Higher Brain Function: State of the Art

This Tuesday The Neurosciences Institute is holding a workshop titled "Neuroscience and Higher Brain Function: State of the Art". Here is the schedule: http://www.nsi.edu/nrp/index.html. See you tomorrow!

Celebrating the 50th Anniversary of the Neurosciences Research Program

Neuroscience and Higher Brain Function: State of the Art
Tuesday, April 17, 2012
The Neurosciences Institute Auditorium
10640 John Jay Hopkins Drive, San Diego, CA 92121
8:30 am - 5:30 pm

Welcome and Introduction
8:30 am   Gerald Edelman

Cortex: A View from the Top (Chairman: Gerald Edelman)
8:40 am   Ranulfo Romo - Perceptual Decision Processes across Cortex
          Jon Kaas - Cortical Sensorimotor Networks
          Thomas Albright - On the Perception of Probable Things

Memory and Cognitive Control (Chairman: Larry Squire)
10:50 am  György Buzsaki - Neural Syntax
          John O'Keefe - Memory Systems of the Medial Temporal Lobes
          Mark D'Esposito - Modularity, Networks, and Cognitive Control

12:45 pm  Lunch

Value and Cognition (Chairman: Einar Gall)
1:45 pm   Michael Merzenich - "Cultural Neuroscience": How the World Changes Your Brain and Our Brains Change the World
          Joaquin Fuster - The Prefrontal Cortex: Executive, Controller, or Enabler of the Perception/Action Cycle?
          Yasushi Miyashita - Cognitive Memory and its Cellular/Network Machinery: How Global Brain-wide Networks Interact with Local Micro-circuits

Diagnosis and Repair (Chairman: Gerald Edelman)
3:55 pm   Eberhard Fetz - Bidirectional Interactions between the Brain and Implantable Computers
          Apostolos Georgopoulos - Prewhitening for Brain Discovery

Summary
5:20 pm   Gerald Edelman

Friday, April 13, 2012

Discovering oscillatory interaction networks with M/EEG: challenges and breakthroughs

People sometimes ask me why I am interested in EEG/MEG source localization; in their eyes it seems to be a direction of minor importance. However, when I was doing research on ICA with application to EEG/MEG data analysis before 2009, I realized the importance of EEG/MEG source localization, and this is why I started my research on sparse signal recovery in 2009.

I could spend more than 1000 words detailing the reasons, but the following paper explains them well for me. In particular, this nice paper points to the true value of EEG/MEG source localization: not the pursuit of higher spatial resolution, but its role as a crucial step in mining brain connectivity. Here is the paper:

Satu Palva, J.Matias Palva, Discovering oscillatory interaction networks with M/EEG: challenges and breakthroughs, Trends in Cognitive Sciences, vol.16, no.4, 2012, pp.219-230

Abstract:
The systems-level neuronal mechanisms that coordinate temporally, anatomically and functionally distributed neuronal activity into coherent cognitive operations in the human brain have remained poorly understood. Synchronization of neuronal oscillations may regulate network communication and could thus serve as such a mechanism. Evidence for this hypothesis, however, was until recently sparse, as methodological challenges limit the investigation of interareal interactions with non-invasive magneto- and electroencephalography (M/EEG) recordings. Nevertheless, recent advances in M/EEG source reconstruction and clustering methods support complete phase-interaction mappings that are essential for uncovering the large-scale neuronal assemblies and their functional roles. These data show that synchronization is a robust and behaviorally significant phenomenon in task-relevant cortical networks and could hence bind distributed neuronal processing to coherent cognitive states.







Paper Club: Whatever Next? Predictive Brains, Situated agents and the future of cognitive science

Today at 11 am in Ed's lab (3509 Mandler) we will discuss a very interesting and comprehensive paper:

Andy Clark (2012) "Whatever Next? Predictive Brains, Situated agents and the future of cognitive science", to appear in Behavioral and Brain Sciences

Here is the download link:  http://www.fil.lu.se/files/eventfile2811.pdf

Abstract: Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. The paper critically examines this ‘hierarchical prediction machine’ approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.

My comment: Although the paper does not involve any computational tools or specific computational frameworks, it in fact points out the limitations of ICA and sparse coding in the computational vision field. It also helped me further understand why deep belief networks and other hierarchical Bayesian models are so promising in that field.




Thursday, April 12, 2012

My New and Permanent Homepage

Since I will graduate and leave UCSD, I have made another homepage: https://sites.google.com/site/researchbyzhang/. This will be my permanent homepage no matter where I am. You are welcome to visit it.




Sunday, April 8, 2012

Presentation: COMPRESSED SENSING AND SPARSE SIGNAL RECOVERY BY SPARSE BAYESIAN LEARNING: MODELS, ALGORITHMS, AND APPLICATIONS

At this Thursday's Research Expo 2012 (April 12), I will present my previous and ongoing work on sparse signal recovery/compressed sensing using sparse Bayesian learning (SBL).

Here is the abstract:

Compressed sensing / sparse signal recovery is a hot field in signal processing. Numerous algorithms have been proposed and have shown promising success in applications. Among these algorithms, sparse Bayesian learning (SBL) has outstanding performance. In this presentation I will summarize our lab's recent work on SBL. I will present four new models that adaptively learn and exploit signals' temporal, spatial, spatiotemporal, and dynamic information. The algorithms derived from these models have shown the best, or at least top-tier, performance among existing compressed sensing algorithms in both computer simulations and practical applications (e.g. telemonitoring, biomarker selection in gene expression, source localization, earthquake detection, neuroimaging). In particular, some of them have:

(1) solved the challenge of telemonitoring non-sparse physiological signals (e.g. fetal ECG contaminated by strong noise, EEG, EMG, etc.) via wireless body-area networks with ultra-low power consumption, which had not been solved before (this application shows SBL's ability to recover non-sparse signals with little distortion, using simple sparse binary matrices as sensing matrices);
(2) achieved higher EEG source localization accuracy than other well-known algorithms in more complicated environments (this application shows SBL's excellent ability under strong noise (0 dB), highly coherent sensing matrices, and rapidly changing signals);
(3) broken the record for predicting cognitive outcomes from neuroimaging measures in Alzheimer's disease in 2011 (showing SBL's ability to deal with highly coherent sensing matrices);
(4) been applied to brain-computer interfaces (solving the speed problem of SBL);
(5) obtained the best accuracy in earthquake detection on some common datasets.

Some of these works have been published in or submitted to: IEEE Trans. on Signal Processing, IEEE Journal of Selected Topics in Signal Processing, Proceedings of the IEEE, IEEE Trans. on Biomedical Engineering, NeuroImage, CVPR 2012, and ICASSP 2010, 2011, and 2012. Also, a US patent is pending.

You are welcome to attend my presentation on recent progress on various spatiotemporal SBL models!

------------------------------------------------------------------------
Photo: N. talangensis (taken on April 7)