My blog reporting on quantitative financial analysis, artificial intelligence for stock investment & trading, and the latest progress in signal processing and machine learning

Thursday, March 22, 2012

Our three papers on BCI published in the Proceedings of the IEEE

This May, the Proceedings of the IEEE will publish a special centennial celebration issue: Reviewing the Past, the Present, and the Future of Electrical Engineering Technology. In this issue our lab and our collaborating labs have three papers on brain-computer interfaces (BCI). They are:

Brent J. Lance, Scott E. Kerick, Anthony J. Ries, Kelvin S. Oie, and Kaleb McDowell, Brain-Computer Interface Technologies in the Coming Decades.

This paper focuses on using online brain–signal processing to enhance human–computer interactions; it highlights past and current BCI applications and proposes future technologies that will expand significantly into the education, entertainment, rehabilitation, and human–system performance domains.


Lun-De Liao, Chin-Teng Lin, Kaleb McDowell, Alma E. Wickenden, Klaus Gramann, Tzyy-Ping Jung, Li-Wei Ko, and Jyh-Yeong Chang, Biosensor Technologies for Augmented Brain-Computer Interfaces in the Next Decades.

This paper focuses on recent and projected advances in a wide range of sensor and acquisition neurotechnologies enabling online brain–signal processing in everyday, real-life environments, and highlights current and future approaches to address the challenges in this field.


Scott Makeig, Christian Kothe, Tim Mullen, Nima Bigdely-Shamlo, Zhilin Zhang, Kenneth Kreutz-Delgado, Evolving Signal Processing for Brain-Computer Interface.

This paper discusses the challenges associated with building robust and useful BCI models from accumulated biological knowledge and data, and the technical problems associated with incorporating multimodal physiological, behavioral, and contextual data that may become ubiquitous in the future.

The third paper was introduced in my previous post. The final version can now be downloaded from here.

The first two papers can be downloaded from the IEEE Xplore.

My comments: When talking about BCI, many people picture a scenario like this: a person wearing a strange cap with electrodes sits in front of a computer screen, watching symbols appear on the screen. But that is a traditional BCI. In fact, the BCI concept has been greatly broadened. Scott once gave a talk on one of the future directions, titled "Imaging Human Agency with Mobile Brain/Body Imaging (MoBI)". The video is here: http://sccn.ucsd.edu/eeglab/Online_EEGLAB_Workshop/EEGLAB12_MoBI.html

This direction is very important. In fact, I strongly feel such a technique can open new worlds in several fields and encourage the hybridization of some traditional fields.


Saturday, March 17, 2012

The future of FMRI connectivity

There is a nice review paper on FMRI connectivity in NeuroImage:

Stephen M. Smith, The future of FMRI connectivity, NeuroImage, accepted, 2012.

Clearly, it has attracted the interest of many researchers in this field, since it has already become one of the most downloaded papers in the journal (even though it is still only an accepted paper)!

Here is the abstract:

“FMRI connectivity” encompasses many areas of research, including resting-state networks, biophysical modelling of task-FMRI data and bottom-up simulation of multiple individual neurons interacting with each other. In this brief paper I discuss several outstanding areas that I believe will see exciting developments in the next few years, in particular concentrating on how I think the currently separate approaches will increasingly need to take advantage of each others' respective complementarities.

And the outline of the contents:
Contents
Introduction - brief review of concepts
      Network modelling via nodes and edges; functional vs. effective connectivity
      Spatial patterns of connectivity
      Connectivity modelling from multiple subjects
Model complexity
      Bottom-up modelling
      Graph theory
      FMRI network modelling methods
Causality
      Patterns of conditional independence; observational vs. interventional studies
      Dynamic biological Bayesian models
      Future
Nonlinearities and temporal nonstationarities
Other issues… and conclusions

Although this is a brief review paper, the author has tried to cover many important aspects of fMRI connectivity. But I think two aspects deserve more discussion. One is sparsity-based models in the section on model complexity. The second is how to verify the fidelity of an estimated connectivity network. I hope to see these two issues, especially the second one, discussed in detail in future review papers.
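
As a side note for readers new to the topic, the "network modelling via nodes and edges" idea in the outline can be made concrete with the simplest functional connectivity estimate, a correlation matrix between node time series. Here is a minimal sketch (mine, not from the paper); the sizes and the threshold are arbitrary illustrative choices:

% Minimal sketch (not from Smith's paper) of the simplest "nodes and edges"
% functional connectivity estimate: a correlation matrix between node time series.
% ts is T-by-N: T time points, N nodes (e.g., ROI-averaged BOLD signals).
T = 200; N = 10;
ts = randn(T, N);                 % placeholder for real FMRI node time series

C = corrcoef(ts);                 % N-by-N full-correlation connectivity matrix
C(logical(eye(N))) = 0;           % remove self-connections

thr = 0.3;                        % arbitrary threshold for declaring an edge
edges = abs(C) > thr;             % binary adjacency matrix (undirected network)

Partial correlation, multi-subject modelling, and effective (directed) connectivity all go beyond this simple estimate, which is exactly the ground the review covers.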

Friday, March 16, 2012

Performance measurement index for compressed sensing of structured signals

Compressed sensing/sparse signal recovery has entered a new phase, namely exploiting the structure of signals for improved performance. Almost all natural signals have rich structure (e.g., images, videos, speech signals, bio-signals), and we know theoretically and empirically that exploiting such structure can improve performance. However, in the literature on compressed sensing of structured signals, the mean square error (MSE) still serves as the main (or even the only) performance index for measuring recovery quality. It's time to rethink the use of MSE if you read the following nice paper:

Zhou Wang, Alan C. Bovik, Mean Squared Error: Love It or Leave It? A New Look at Signal Fidelity Measures, IEEE Signal Processing Magazine, vol. 26, no. 1, 2009, pp. 98-117.

The authors list the implicit assumptions when using MSE, which I quote below:

1) Signal fidelity is independent of temporal or spatial relationships between the samples of the original signal. In other words, if the original and distorted signals are randomly re-ordered in the same way, then the MSE between them will be unchanged.

2) Signal fidelity is independent of any relationship between the original signal and the error signal. For a given error signal, the MSE remains unchanged, regardless of which original signal it is added to.

3) Signal fidelity is independent of the signs of the error signal samples.

4) All signal samples are equally important to signal fidelity. 


Obviously, when we measure the recovery quality of structured signals, the above assumptions are violated.
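
To see assumption 1 in action, here is a tiny numerical illustration (with a made-up signal, not an example from the paper): randomly re-ordering a signal and its noisy reconstruction in the same way destroys all temporal structure, yet leaves the MSE untouched.

% Tiny illustration (made-up signal, not from the paper) of assumption 1:
% MSE is blind to the ordering of samples, so it cannot "see" structure.
n = 1000;
x = sin(2*pi*(1:n)/100);               % original smooth (structured) signal
x_hat = x + 0.1*randn(1, n);           % a noisy reconstruction of it

p = randperm(n);                       % one random re-ordering applied to both
mse_before = mean((x - x_hat).^2);
mse_after  = mean((x(p) - x_hat(p)).^2);   % identical to mse_before
fprintf('MSE before: %.6f, after re-ordering: %.6f\n', mse_before, mse_after);

A structure-aware index such as SSIM, discussed below, would not be blind to such a change.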

The authors give a number of nice examples. Here is one of them:


(a) is the original image, and (b)-(d) are three images with added distortions. (b), (c), and (d) have almost the same MSE, but clearly their quality differs. MSE fails to show this difference. In contrast, the other two measurement indexes, SSIM and CW-SSIM, capture the difference well.

SSIM, the Structural SIMilarity index, was proposed for structured signals, especially images. In its basic form (measured on small patches x and y of an image) it is

SSIM(x, y) = l(x, y) · c(x, y) · s(x, y),

which measures the similarities of three aspects of the image patches: the similarity l(x, y) of the local patch luminances (brightness values), the similarity c(x, y) of the local patch contrasts, and the similarity s(x, y) of the local patch structures. The SSIM index is computed locally within a sliding window that moves pixel by pixel across the image. The SSIM score of the entire image is then computed by simply averaging the SSIM values across the image.

There are many variants of the basic SSIM index. Interested people can read the paper and the references cited.

The code for computing SSIM can be found here: https://ece.uwaterloo.ca/~z70wang/research/ssim/

Note that the SSIM index can be easily modified to measure 1-D structured signals.
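
As a rough illustration of such a 1-D modification (my own sketch, not the official code linked above; the window length and stabilizing constants are arbitrary choices):

% Rough 1-D SSIM sketch (my own, not the official code linked above).
% x, ref: 1-D signals of equal length; win: sliding-window length.
function s = ssim_1d(x, ref, win)
    C1 = 1e-4; C2 = 9e-4;                      % small constants to avoid division by zero
    n = length(x);
    vals = zeros(1, n - win + 1);
    for i = 1:(n - win + 1)
        a = x(i:i+win-1); b = ref(i:i+win-1);
        mu_a = mean(a);  mu_b = mean(b);
        va = mean((a - mu_a).^2);              % local variances
        vb = mean((b - mu_b).^2);
        cab = mean((a - mu_a).*(b - mu_b));    % local covariance
        vals(i) = ((2*mu_a*mu_b + C1)*(2*cab + C2)) / ...
                  ((mu_a^2 + mu_b^2 + C1)*(va + vb + C2));
    end
    s = mean(vals);                            % average local SSIM over all windows
end

For example, s = ssim_1d(x_recovered, x_true, 16) returns a score close to 1 when the recovered signal preserves the local structure of the true signal.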

Added:
Igor has two posts on SSIM on his Nuit Blanche blog:
http://nuit-blanche.blogspot.com/2011/11/randomized-thoughts-and-feedbacks.html

And here is the most recent improvement on SSIM:

Calibrating MS-SSIM for Compression Distortions Using MLDS, by C. Charrier, K. Knoblauch, L. T. Maloney, and A. C. Bovik, ICIP 2011.

Thursday, March 1, 2012

A paper has been accepted by CVPR 2012

We have a new paper just accepted by CVPR 2012:

Sparse Bayesian Multi-Task Learning for Predicting Cognitive Outcomes from Neuroimaging Measures in Alzheimer's Disease.

This study proposes a sparse Bayesian multi-task learning algorithm to improve the accuracy of predicting cognitive outcomes from neuroimaging measures in Alzheimer's disease. A variant of T-MSBL is proposed, and its connection to existing algorithms in this field is established, showing the advantages of the T-MSBL family. We achieved higher prediction accuracy than the latest results published in top journals in 2011.

I will introduce the paper in detail in my next post. The camera-ready version can be downloaded from here, and the code will be posted soon.

Saturday, February 25, 2012

My baby was born last Saturday

A gift from God:
(picture taken with a cell phone)

No doubt, my new life starts...

Thursday, February 9, 2012

Compressed Sensing Talks in ITA Workshop in San Diego- Part II (Thursday)

In my previous post I definitely missed some talks at this year's ITA.

Tomorrow (Thursday) there will be many interesting talks on compressed sensing:
8:50: On L0 search for low-rank matrix completion, by Wei Dai, Imperial College London, Ely Kerman, UIUC, Olgica Milenkovic, UIUC

9:10 Orthogonal matching pursuit with replacement, by Inderjit Dhillon, University Of Texas, Prateek Jain, Microsoft, Ambuj Tewari, University Of Texas

3:00 Sparse sampling: bounds and applications, by Martin Vetterli, EPFL

4:15: Bilinear generalized approximate message passing (BiG-AMP) for matrix recovery problems, by Phil Schniter, Ohio State, Volkan Cevher, EPFL

There is another talk at the same time:
Construction of low-coherence frames using group theory, by Babak Hassibi, Caltech, Matthew Thill, Caltech

4:35:  Sparse recovery with graph constraints, by Meng Wang, Cornell, Weiyu Xu, Cornell, Enrique Mallada, Cornell, Kevin Tang, Cornell

4:55: Asymptotic analysis of complex LASSO via complex approximate message passing, by Arian Maleki, Rice, Laura Anitori, TNO, Netherlands, Zai Yang, Nanyang Technological University, Richard Baraniuk, Rice

In addition to the compressed sensing talks, there are many interesting talks on Music Information Retrieval, Clustering, Learning Theory, Graphical Models and Inference, and Statistical Machine learning & Applications.

Thursday will be a wonderful day.

Monday, February 6, 2012

A New Paper: Evolving Signal Processing for Brain-Computer Interface

We have a survey paper on BCI recently accepted by the Proceedings of the IEEE (Special 100th Anniversary Issue):

Scott Makeig, Christian Kothe, Tim Mullen, Nima Bigdely-Shamlo, Zhilin Zhang, Kenneth Kreutz-Delgado, Evolving Signal Processing for Brain-Computer Interface, Proceedings of the IEEE, 2012

The paper surveys the past, present, and future of signal processing and machine learning for cognitive state assessment, especially BCI, wireless EEG, and mobile EEG.

The paper can be downloaded from here

Here is the abstract:
Because of the increasing portability and wearability of noninvasive electrophysiological systems that record and process electrical signals from the human brain, automated systems for assessing changes in user cognitive state, intent, and response to events are of increasing interest. Brain-computer interface (BCI) systems can make use of such knowledge to deliver relevant feedback to the user or to an observer, or within a human-machine system to increase safety and enhance overall performance. Building robust and useful BCI models from accumulated biological knowledge and available data is a major challenge, as are technical problems associated with incorporating multimodal physiological, behavioral, and contextual data that may in future be increasingly ubiquitous. While performance of current BCI modeling methods is slowly increasing, current performance levels do not yet support widespread uses. Here we discuss the current neuroscientific questions and data processing challenges facing BCI designers and outline some promising current and future directions to address them.

Friday, February 3, 2012

Compressed Sensing Talks in ITA Workshop in San Diego (Sunday 2/5 - Friday 2/10)

Starting this Sunday we will have a great annual academic event in San Diego: the ITA Workshop. Each year, the workshop invites many well-established scholars in the field of compressed sensing to give talks.

Here is the workshop calendar: http://ita.ucsd.edu/workshop/12/talks

In particular, I found the following talks on compressed sensing/sparse signal recovery (I probably missed some):

Monday:
11:20: Quick partial sparse support recovery by Vincent Poor, Princeton, Ali Tajer, Princeton
3:00:  Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing by David Donoho, Stanford, Adel Javanmard, Stanford, Andrea Montanari, Stanford

Thursday (I missed some interesting talks on this day; a more complete list can be seen here: http://marchonscience.blogspot.com/2012/02/compressed-sensing-talks-in-ita_09.html):
8:50: On L0 search for low-rank matrix completion, by Wei Dai, Imperial College London, Ely Kerman, UIUC, Olgica Milenkovic, UIUC
9:10: Orthogonal matching pursuit with replacement, by Inderjit Dhillon, University Of Texas, Prateek Jain, Microsoft, Ambuj Tewari, University Of Texas
3:00: Sparse sampling: bounds and applications by Martin Vetterli, EPFL
3:40: Compressive depth acquisition cameras: Principles and demonstrations by Vivek Goyal, MIT
4:15: Construction of low-coherence frames using group theory by Babak Hassibi, Caltech, Matthew Thill, Caltech
4:15: Bilinear generalized approximate message passing (BiG-AMP) for matrix recovery problems, by Phil Schniter, Ohio State, Volkan Cevher, EPFL
4:35: Sparse recovery with graph constraints by Meng Wang, Cornell, Weiyu Xu, Cornell, Enrique Mallada, Cornell, Kevin Tang, Cornell
4:55:  Asymptotic analysis of complex LASSO via complex approximate message passing by Arian Maleki, Rice, Laura Anitori, TNO, Netherlands, Zai Yang, Nanyang Technological University, Richard Baraniuk, Rice

Friday:
11:20: Faster algorithms for sparse fourier transform, by Haitham Hassanieh, MIT, Piotr Indyk, MIT, Dina Katabi, MIT, Eric Price, MIT
Compressive sensing meets group testing: LP decoding for non-linear (disjunctive) measurements, by Chun Lam Chan, CUHK, Sidharth Jaggi, CUHK, Venkatesh Saligrama, BU, Samar Agnihotri, CUHK
1:35: The Big Data bootstrap, by Ariel Kleiner, UC Berkeley, Ameet Talwalkar, UC Berkeley, Purna Sarkar, UC Berkeley, Michael Jordan, UC Berkeley

In addition to these talks, there are other interesting talks on high-dimensional data analysis, information theory, and neuroscience/AI.

Next week should be a wonderful week, except for one unhappy thing: this year ITA will be held in a hotel in San Diego, not on the UCSD campus as in previous years. It's so inconvenient :(

Thursday, January 26, 2012

Andrew Ng: Machine learning and AI via large scale brain simulations

The location has been changed to: CALIT2 ~ Atkinson Hall Auditorium
Time: Monday, January 30th, 2012, 11:00 am

Abstract

By building large-scale simulations of cortical (brain) computations, can we enable revolutionary progress in AI and machine learning? Machine learning often works very well, but can be a lot of work to apply because it requires spending a long time engineering the input representation (or "features") for each specific problem. This is true for machine learning applications in vision, audio, text/NLP and other problems.

To address this, researchers have recently developed "unsupervised feature learning" and "deep learning" algorithms that can automatically learn feature representations from unlabeled data, thus bypassing much of this time-consuming engineering. Many of these algorithms are developed using simple simulations of cortical (brain) computations, and build on such ideas as sparse coding and deep belief networks. By doing so, they exploit large amounts of unlabeled data (which is cheap and easy to obtain) to learn a good feature representation. These methods have also surpassed the previous state-of-the-art on a number of problems in vision, audio, and text. In this talk, I describe some of the key ideas behind unsupervised feature learning and deep learning, and present a few algorithms. I also speculate on how large-scale brain simulations may enable us to make significant progress in machine learning and AI, especially perception. This talk will be broadly accessible, and will not assume a machine learning background.

Tuesday, January 24, 2012

Literature-CS: Sparse Signal Recovery/Compressed Sensing of ICASSP 2012

ICASSP 2012 has posted the technical program: http://www.icassp2012.com/RegularProgram.asp

Here are the sessions on sparse signal recovery/compressed sensing:


SPTM-P6: Joint SPTM/SPCOM Session: Sampling Sparsity and Reconstruction II

SPCOM-P2: Sampling, Coding and Modulation

SPTM-L3: Compressed Sensing and Sparsity I

SPTM-L4: Compressed Sensing and Sparsity II

SPTM-L5: Compressed Sensing and Sparsity III

SPCOM-L4: Sparse Signal Processing for Communications and Networking

SPTM-P9: Sampling and Reconstruction

SAM-P5: Joint SAM/SPTM Session: Compressed Sensing and Sparse Signal Modeling


My paper will be presented in the session SPTM-L4: Compressed Sensing and Sparsity II.

The title is:
Z. Zhang, B. D. Rao, Recovery of Block Sparse Signals Using the Framework of Block Sparse Bayesian Learning.

You can read it now from my website: http://sccn.ucsd.edu/%7Ezhang/Zhang_ICASSP2012.pdf
Codes can be downloaded at: http://sccn.ucsd.edu/%7Ezhang/BSBL_EM_Code.zip

The paper is early work toward the journal version:

Z. Zhang, B. D. Rao, Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation.

The paper can be obtained from: http://arxiv.org/abs/1201.0862

Tuesday, January 10, 2012

A New Paper: Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation

We just finished a paper on the block sparse model, which exploits intra-block correlation under known or unknown block partition:

Zhilin Zhang, Bhaskar D. Rao, Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation, submitted to IEEE Transactions on Signal Processing, January 2012.

The associated codes can be downloaded here: https://sites.google.com/site/researchbyzhang/bsbl

Here is the abstract:

We examine the recovery of block sparse signals and extend the framework in two important directions; one by exploiting intra-block correlation and the other by generalizing the block structure. We propose two families of algorithms based on the framework of block sparse Bayesian learning (bSBL). One family, directly derived from the bSBL framework, requires knowledge of the block partition. Another family, derived from an expanded bSBL framework, is based on a weaker assumption about the a priori information of the block structure, and can be used in the cases when block partition, block size, block sparsity are all unknown. Using these algorithms we show that exploiting intra-block correlation is very helpful to improve recovery performance. These algorithms also shed light on how to modify existing algorithms or design new ones to exploit such correlation for improved performance.

The paper can be downloaded here: http://arxiv.org/abs/1201.0862. The codes will be posted soon, but you can email me for them right now.

In this paper, we proposed three algorithms (BSBL-EM, BSBL-BO, BSBL-L1) for the block sparse model when block partition is known, and three algorithms (EBSBL-EM, EBSBL-BO, EBSBL-L1) for the model when block partition is unknown.

Here are some highlights:

[1] These algorithms have the best recovery performance among all the existing algorithms.

I spent more than a month reading published algorithms, downloading their codes, performing experiments, emailing authors to ask for optimal parameter tuning, etc. I didn't find any existing algorithm with better performance than mine. If you find one, please let me know.


Here is a comparison among all well-known algorithms when block partition is given (signal length was fixed while we changed the measurement number; see the paper for details):

Here is a comparison among existing algorithms when block partition is unknown (signal length, measurement number, and the number of nonzero elements in the signal were fixed while we changed the nonzero block number; each block had random size and location. See the paper for details)



[2] These algorithms are the first algorithms that adaptively exploit intra-block correlation, i.e., the correlation among elements within a block (a small sketch of such a signal is given at the end of this post).


[3] We revealed that intra-block correlation, if exploited, can significantly improve recovery performance and reduce the number of measurements.

Here is an experiment result showing that our algorithms perform better as the intra-block correlation increases (see the paper for details):

[4] We also found that intra-block correlation has little effect on the performance of existing algorithms. This differs from our finding on the MMV model, where we found that temporal correlation has an obvious negative effect on the performance of existing algorithms (for the effect of temporal correlation on algorithm performance, see here).

Here is an experiment result showing that the performance of Block-CoSaMP and Block-OMP is almost unaffected by intra-block correlation (see the paper for details).
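
To make "intra-block correlation" concrete, here is a small sketch (my own illustration, not the exact experimental setup in the paper) that generates a block sparse signal whose nonzero blocks follow an AR(1) process, i.e., neighboring elements within a block are correlated:

% Small illustration (my own, not the exact setup in the paper) of a block
% sparse signal whose nonzero blocks have AR(1) intra-block correlation.
N = 200; blkLen = 10; nBlocks = N/blkLen; nActive = 4;
r = 0.9;                                     % intra-block correlation coefficient
x = zeros(N, 1);
tmp = randperm(nBlocks);
for k = tmp(1:nActive)                       % pick which blocks are nonzero
    b = zeros(blkLen, 1);
    b(1) = randn;
    for t = 2:blkLen                         % AR(1) process inside the block
        b(t) = r*b(t-1) + sqrt(1 - r^2)*randn;
    end
    x((k-1)*blkLen + (1:blkLen)) = b;
end

M = 80;                                      % number of measurements
A = randn(M, N);
A = A ./ repmat(sqrt(sum(A.^2, 1)), M, 1);   % unit-norm columns
y = A*x + 0.01*randn(M, 1);                  % noisy compressed measurements

Algorithms in the BSBL/EBSBL family model and exploit this within-block correlation, whereas algorithms such as Block-OMP and Block-CoSaMP ignore it.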

Tuesday, November 8, 2011

Updated T-MSBL code

I have just updated the T-MSBL/T-SBL code. Using the updated version, you do NOT need to consider parameter tuning for a general compressed sensing problem. By a general compressed sensing problem, I mean one where the columns of the matrix A have unit L2-norm. When your problem does not satisfy this, you can first transform your original problem:
Y = A X + V
to
Y = A W W^{-1} X + V  = A' X' + V
such that A' = A W has unit-norm columns, where W is a diagonal matrix whose i-th diagonal entry is the reciprocal of the L2-norm of the i-th column of A, and X' = W^{-1} X. Once you obtain the estimate of X', you can recover X by X = W X'.
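
Here is a minimal sketch of this normalization trick in MATLAB (my own, using the TMSBL calling convention shown below):

% Minimal sketch (my own) of the column-normalization trick described above.
colNorms = sqrt(sum(A.^2, 1));            % L2-norm of each column of A
W = diag(1 ./ colNorms);                  % diagonal scaling matrix
A_prime = A * W;                          % A' now has unit-norm columns

X_prime = TMSBL(A_prime, Y, 'noise', 'mild');   % solve the normalized problem
X_est = W * X_prime;                      % undo the scaling: X = W * X'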


Calling T-MSBL is easy:


o   When noise is large (e.g. SNR <=6 dB)
X_est = TMSBL(A, Y, 'noise', 'large')

o   When noise is mild (e.g. 7 dB <= SNR <=22 dB)
X_est = TMSBL(A, Y, 'noise', 'mild')

o   When noise is small (e.g. SNR >22 dB)
X_est = TMSBL(A, Y, 'noise', 'small')

o   When no noise
X_est = TMSBL(A, Y, 'noise', 'no')

But note that the numbers 6 dB and 22 dB above are not exact values. They just give you a rough idea of what the 'small noise case', the 'mild noise case', and the 'strong noise case' are. In this sense, T-MSBL does not require knowing the noise level.

When you use T-MSBL in a practical problem where you really have no idea of the noise strength range (such as gene feature extraction), simply use the call corresponding to the 'mild noise case', i.e.
X_est = TMSBL(A, Y, 'noise', 'mild') 
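
If you do happen to have a rough SNR estimate, a small wrapper like the following (my own convenience sketch, built only on the TMSBL calls shown above) picks the flag automatically:

% Convenience sketch (my own, built only on the TMSBL calls shown above):
% pick the noise flag from a rough SNR estimate in dB.
function X_est = tmsbl_by_snr(A, Y, snr_db)
    if snr_db <= 6
        flag = 'large';     % strong noise
    elseif snr_db <= 22
        flag = 'mild';      % moderate noise
    else
        flag = 'small';     % weak noise
    end
    X_est = TMSBL(A, Y, 'noise', flag);
end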


I will update the code in the near future so that in any case (noisy, noiseless, real variables, complex variables, large-scale or small-scale data) you only need to call X_est = TMSBL(A, Y). But I am currently very busy with my ongoing papers (four journal papers in three fields), so please forgive me for not doing this right now.







Friday, November 4, 2011

Minisymposium on New Dimensions in Brain-Machine Interfaces at UCSD

Wednesday, November 9, 2011
1pm-6pm
Fung Auditorium
Powell-Focht Bioengineering Hall
UC San Diego

The minisymposium highlights the latest advances and emerging directions in brain-machine and neuron-silicon interface technology and their applications to neuroscience and neuroengineering. Topics include high-dimensional EEG and ECoG systems, wireless and unobtrusive brain-machine interfaces, flexible bioelectronics, real-time decoding of brain and motor activity, and signal processing methods for intelligent human-system interfaces.


PROGRAM

1:00-1:10pm    Welcome

1:10-1:50pm    Engineering hope with biomimetic systems
              Wentai Liu, UC Santa Cruz

1:50-2:30pm    A low power system-on-chip design for real-time ICA based BCI applications
              Wai-Chi Fang, National Chiao-Tung University, Taiwan

2:30-3:10pm    Developing practical non-contact EEG electrodes
              Yu Mike Chi, Cognionics

3:10-3:50pm    A new platform for BCI: from iBrain to the Stephen Hawking project
              Philip Low, Neurovigil


3:50-4:20pm    Coffee break


4:20-5:00pm    Interdisciplinary approaches to design high performance brain-machine interfaces
              Todd P. Coleman, UC San Diego

5:00-5:40pm    Evolving data collection and signal processing methods for intelligent human-system interfaces
              Scott Makeig, UC San Diego

5:40-6:00pm    Panel discussion


Organized by:

Tzyy-Ping Jung <tpjung@ucsd.edu>
Center for Advanced Neurological Monitoring,
Institute of Engineering in Medicine <http://iem.ucsd.edu>, and
Institute for Neural Computation <http://inc.ucsd.edu>

With support from:

Qualcomm <http://www.qualcomm.com>, and
Brain Corporation <http://www.braincorporation.com>