My blogs reporting quantitative financial analysis, artificial intelligence for stock investment & trading, and latest progress in signal processing and machine learning

Friday, March 30, 2012

Literature-ICA/LVA: Recently published/accepted ICA papers

Here are some other recently published/accepted ICA papers. Enjoy!

-------------------------------------------------------------------------------------------------------------------
Shashwath A. Meda, Balaji Narayanan, Jingyu Liu, Nora I. Perrone-Bizzozero, Michael C. Stevens, Vince D. Calhoun, David C. Glahn, Li Shen, Shannon L. Risacher, Andrew J. Saykin, Godfrey D. Pearlson, A large scale multivariate parallel ICA method reveals novel imaging-genetic relationships for Alzheimer's disease in the ADNI cohort, NeuroImage, vol.60, 2012

Abstract: The underlying genetic etiology of late onset Alzheimer's disease (LOAD) remains largely unknown, likely due to its polygenic architecture and a lack of sophisticated analytic methods to evaluate complex genotype-phenotype models. The aim of the current study was to overcome these limitations in a bimultivariate fashion by linking intermediate magnetic resonance imaging (MRI) phenotypes with a genome-wide sample of common single nucleotide polymorphism (SNP) variants. We compared associations between 94 different brain regions of interest derived from structural MRI scans and 533,872 genome-wide SNPs using a novel multivariate statistical procedure, parallel-independent component analysis, in a large, national multi-center subject cohort. The study included 209 elderly healthy controls, 367 subjects with amnestic mild cognitive impairment and 181 with mild, early-stage LOAD, all of them Caucasian adults, from the Alzheimer's Disease Neuroimaging Initiative cohort. Imaging was performed on comparable 1.5 T scanners at over 50 sites in the USA/Canada. Four primary "genetic components" were associated significantly with a single structural network including all regions involved neuropathologically in LOAD. Pathway analysis suggested that each component included several genes already known to contribute to LOAD risk (e.g. APOE4) or involved in pathologic processes contributing to the disorder, including inflammation, diabetes, obesity and cardiovascular disease. In addition, significant novel genes identified included ZNF673, VPS13, SLC9A7, ATP5G2 and SHROOM2. Unlike conventional analyses, this multivariate approach identified distinct groups of genes that are plausibly linked in physiologic pathways, perhaps epistatically. Further, the study exemplifies the value of this novel approach to explore large-scale data sets involving high-dimensional gene and endophenotype data.

[My comments: I really like this paper, not only for the algorithm but also for the application. The application is closely related to the one in my CVPR paper, and some of the authors here are also co-authors of my CVPR paper.]



-------------------------------------------------------------------------------------------------------------------
Matthew Anderson, Tülay Adalı, Xi-Lin Li, Joint Blind Source Separation With Multivariate Gaussian Model: Algorithms and Performance Analysis, IEEE Trans. on Signal Processing, vol.60, no.4, 2012

Abstract: In this paper, we consider the joint blind source separation (JBSS) problem and introduce a number of algorithms to solve the JBSS problem using the independent vector analysis (IVA) framework. Source separation of multiple datasets simultaneously is possible when the sources within each and every dataset are independent of one another and each source is dependent on at most one source within each of the other datasets. In addition to source separation, the IVA framework solves an essential problem of JBSS, namely the identification of the dependent sources across the datasets. We propose to use the multivariate Gaussian source prior to achieve JBSS of sources that are linearly dependent across datasets. Analysis within the paper yields the local stability conditions, nonidentifiability conditions, and induced Cramér-Rao lower bound on the achievable interference to source ratio for IVA with multivariate Gaussian source priors. Additionally, by exploiting a novel nonorthogonal decoupling of the IVA cost function we introduce both Newton and quasi-Newton optimization algorithms for the general IVA framework.

[My comments: Joint analysis of multiple datasets is a very important and meaningful topic in biomedicine. The topic is hot not only in BSS but also in other machine learning subfields, and even in sparse signal recovery/L1-penalized regression in high-dimensional settings.]



-------------------------------------------------------------------------------------------------------------------

Gautam V. Pendse, PMOG: The projected mixture of Gaussians model with application to blind source separation, Neural Networks, vol.28, 2012, pp.40-60

Abstract: We extend the mixtures of Gaussians (MOG) model to the projected mixture of Gaussians (PMOG) model. In the PMOG model, we assume that q dimensional input data points zi are projected by a q dimensional vector w into 1-D variables ui. The projected variables ui are assumed to follow a 1-D MOG model. In the PMOG model, we maximize the likelihood of observing ui to find both the model parameters for the 1-D MOG as well as the projection vector w. First, we derive an EM algorithm for estimating the PMOG model. Next, we show how the PMOG model can be applied to the problem of blind source separation (BSS). In contrast to conventional BSS where an objective function based on an approximation to differential entropy is minimized, PMOG based BSS simply minimizes the differential entropy of projected sources by fitting a flexible MOG model in the projected 1-D space while simultaneously optimizing the projection vector w. The advantage of PMOG over conventional BSS algorithms is the more flexible fitting of non-Gaussian source densities without assuming near-Gaussianity (as in conventional BSS) and still retaining computational feasibility.

[My comments: MOG is a very useful probabilistic model for BSS algorithms. I am glad to read this paper]
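To illustrate the core idea behind PMOG, here is a minimal NumPy sketch: project q-dimensional data with a fixed unit vector w and fit a 1-D MOG to the projected samples by EM. This is only a sketch of the MOG-fitting step under a fixed projection; the paper's actual algorithm optimizes w jointly with the mixture parameters, which is not done here, and all data and parameter choices below are illustrative.

```python
import numpy as np

def fit_mog_1d(u, K=2, iters=100):
    """EM for a 1-D mixture of Gaussians fitted to projected samples u.
    Sketch of the MOG fit at the core of the PMOG idea (projection fixed)."""
    pi = np.full(K, 1.0 / K)                       # mixing weights
    mu = np.quantile(u, np.linspace(0.1, 0.9, K))  # spread-out initial means
    var = np.full(K, u.var())                      # initial variances
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        d = (u[:, None] - mu) ** 2
        logp = np.log(pi) - 0.5 * (np.log(2 * np.pi * var) + d / var)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(u)
        mu = (r * u[:, None]).sum(axis=0) / nk
        var = (r * (u[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Project 2-D bimodal data z with a fixed unit vector w, then fit the 1-D MOG
rng = np.random.default_rng(1)
z = np.vstack([rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
w = np.array([1.0, 1.0]) / np.sqrt(2)
u = z @ w
pi, mu, var = fit_mog_1d(u)
```

With this well-separated toy data the fitted means land near the two projected cluster centers (about ±4.24), showing why a flexible 1-D MOG in the projected space can capture non-Gaussian source densities.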



 -------------------------------------------------------------------------------------------------------------------
Jen-Tzung Chien, Hsin-Lung Hsieh, Convex Divergence ICA for Blind Source Separation, IEEE Trans. on Audio, Speech, and Language Processing, vol.20, no.1, 2012

Abstract: Independent component analysis (ICA) is vital for unsupervised learning and blind source separation (BSS). The ICA unsupervised learning procedure attempts to demix the observation vectors and identify the salient features or mixture sources. This work presents a novel contrast function for evaluating the dependence among sources. A convex divergence measure is developed by applying convex functions to Jensen's inequality. Adjustable with a convexity parameter, this inequality-based divergence measure has a wide range of steepest descents to reach its minimum value. A convex divergence ICA (C-ICA) is constructed and a nonparametric C-ICA algorithm is derived with different convexity parameters where the non-Gaussianity of source signals is characterized by the Parzen window-based distribution. Experimental results indicate that the specialized C-ICA significantly reduces the number of learning epochs during estimation of the demixing matrix. The convergence speed is improved by using the scaled natural gradient algorithm. Experiments on the BSS of instantaneous, noisy and convolutive mixtures of speech and music signals further demonstrate the superiority of the proposed C-ICA to JADE, Fast-ICA, and the nonparametric ICA based on mutual information.

[My comments: A nice paper dealing with dependence among sources]



 -------------------------------------------------------------------------------------------------------------------

Martin Kleinsteuber, Hao Shen, Blind Source Separation With Compressively Sensed Linear Mixtures, IEEE Signal Processing Letters, vol.19, no.2, 2012

Abstract: This work studies the problem of simultaneously separating and reconstructing signals from compressively sensed linear mixtures. We assume that all source signals share a common sparse representation basis. The approach combines classical Compressive Sensing (CS) theory with a linear mixing model. It allows the mixtures to be sampled independently of each other. If samples are acquired in the time domain, this means that the sensors need not be synchronized. Since Blind Source Separation (BSS) from a linear mixture is only possible up to permutation and scaling, factoring out these ambiguities leads to a minimization problem on the so-called oblique manifold. We develop a geometric conjugate subgradient method that scales to large systems for solving the problem. Numerical results demonstrate the promising performance of the proposed algorithm compared to several state of the art methods.
[My comments: It's interesting to see this hybrid of compressed sensing and ICA.]



 -------------------------------------------------------------------------------------------------------------------
Fasong Wang, Linrang Zhang, Rui Li, Harmonic retrieval by period blind source extraction method: Model and algorithm, accepted by Digital Signal Processing, 2012

Abstract: A frequently encountered problem in signal processing is harmonic retrieval in additive colored Gaussian or non-Gaussian noise, especially when the frequencies of the harmonic signals are closely spaced in the frequency domain. The purpose of this paper is to develop a novel harmonic retrieval algorithm based on the blind source extraction (BSE) method from linear mixtures of harmonic signals using only one observed channel signal. First, we establish the blind source separation (BSS) based harmonic retrieval model in additive noise using only one observed channel, and analyze in detail the fundamental principle of the BSE-based harmonic retrieval algorithm. Then, based on the established harmonic BSS model, we propose a BSE approach to harmonic retrieval using the concept of the period BSE method; as a result, the harmonic retrieval algorithm using only one channel of mixed signals is derived. Simulation results show that the proposed algorithm is able to separate the harmonic source signals and yields ideal performance.

[My comments: I am glad to see another model for blind source extraction using only one channel signal, and even more glad to see my previous work cited here. But I would really like to see its performance when used to extract FECG.]


Literature-ICA/LVA: Special Issue on Latent Variable Analysis and Signal Separation

Signal Processing has a special issue on Latent Variable Analysis and Signal Separation (Volume 92, Issue 8, Pages 1765-1960, August 2012). The papers can now be downloaded from the website: http://www.sciencedirect.com/science/journal/01651684/92/8



Below are the papers:

Consistency and asymptotic normality of FastICA and bootstrap FastICA
Nima Reyhani, Jarkko Ylipaavalniemi, Ricardo Vigário, Erkki Oja

Independent component analysis based on first-order statistics
V. Zarzoso, R. Martín-Clemente, S. Hornillo-Mellado

ICA-based and second-order separability of nonlinear models involving reference signals: General properties and application to quantum bits
Yannick Deville

ICA over finite fields—Separability and algorithms
Harold W. Gutch, Peter Gruber, Arie Yeredor, Fabian J. Theis

Stability of independent vector analysis
Takashi Itahashi, Kiyotoshi Matsuoka

Complex-valued independent vector analysis: Application to multivariate Gaussian model
Matthew Anderson, Xi-Lin Li, Tülay Adalı

Multiple-snapshots BSS with general covariance structures: A partial maximum likelihood approach involving weighted joint diagonalization
Arie Yeredor

Extraction of signals with higher order temporal structure using Correntropy
Eder Santana, Jose C. Principe, Ewaldo Santana, Allan Kardec Barros

Algorithms for probabilistic latent tensor factorization
Y. Kenan Yılmaz, A. Taylan Cemgil

Supervised input space scaling for non-negative matrix factorization
J. Driesen, H. Van hamme

ISI sparse channel estimation based on SL0 and its application in ML sequence-by-sequence equalization
Rad Niazadeh, Sina Hamidi Ghalehjegh, Massoud Babaie-Zadeh, Christian Jutten

A tractable framework for estimating and combining spectral source models for audio source separation
Simon Arberet, Alexey Ozerov, Frédéric Bimbot, Rémi Gribonval

Regulatory component analysis: A semi-blind extraction approach to infer gene regulatory networks with imperfect biological knowledge
Chen Wang, Jianhua Xuan, Ie-Ming Shih, Robert Clarke, Yue Wang

Use of bimodal coherence to resolve the permutation problem in convolutive BSS
Qingju Liu, Wenwu Wang, Philip Jackson

The signal separation evaluation campaign (2007–2010): Achievements and remaining challenges
Emmanuel Vincent, Shoko Araki, Fabian Theis, Guido Nolte, Pau Bofill, Hiroshi Sawada, Alexey Ozerov, Vikrham Gowreesunker, Dominik Lutter, Ngoc Q.K. Duong

Informed source separation through spectrogram coding and data embedding
Antoine Liutkus, Jonathan Pinel, Roland Badeau, Laurent Girin, Gaël Richard

Multi-source TDOA estimation in reverberant audio using angular spectra and clustering
Charles Blandin, Alexey Ozerov, Emmanuel Vincent

Thursday, March 22, 2012

Our three papers on BCI published in the Proceedings of the IEEE

This May, the Proceedings of the IEEE will publish a special centennial celebration issue: Reviewing the Past, the Present, and the Future of Electrical Engineering Technology. In this issue, our lab and our collaborators' labs have three papers on brain-computer interfaces (BCI). They are:

Brent J. Lance, Scott E. Kerick, Anthony J. Ries, Kelvin S. Oie, and Kaleb McDowell, Brain-Computer Interface Technologies in the Coming Decades.

This paper focuses on using online brain–signal processing to enhance human–computer interactions; it highlights past and current BCI applications and proposes future technologies that will make significant expansion into education, entertainment, rehabilitation, and human–system performance domains.


Lun-De Liao, Chin-Teng Lin, Kaleb McDowell, Alma E. Wickenden, Klaus Gramann, Tzyy-Ping Jung, Li-Wei Ko, and Jyh-Yeong Chang, Biosensor Technologies for Augmented Brain-Computer Interfaces in the Next Decades.

This paper focuses on recent and projected advances of a wide range of sensor and acquisition neurotechnologies enabling online brain–signal processing in everyday, real-life environments, and highlights current and future approaches to address the challenges in this field. 


Scott Makeig, Christian Kothe, Tim Mullen, Nima Bigdely-Shamlo, Zhilin Zhang, Kenneth Kreutz-Delgado, Evolving Signal Processing for Brain-Computer Interface

This paper discusses the challenges associated with building robust and useful BCI models from accumulated biological knowledge and data, and the technical problems associated with incorporating multimodal physiological, behavioral, and contextual data that may become ubiquitous in the future.

The third paper has been introduced in my previous post. Now the final version can be downloaded from here.

The first two papers can be downloaded from the IEEE Xplore.

My comments: When talking about BCI, many people imagine a scenario like this: a person wearing a strange cap with electrodes sits in front of a computer screen, watching symbols appear on the screen. But that is the traditional BCI. In fact, the BCI concept has been broadened considerably. Scott once gave a talk on one of the future directions, titled "Imaging Human Agency with Mobile Brain/Body Imaging (MoBI)". The video is here: http://sccn.ucsd.edu/eeglab/Online_EEGLAB_Workshop/EEGLAB12_MoBI.html

This direction is very important. In fact, I strongly feel such techniques can open new worlds in several fields and encourage hybridization across traditional fields.


Saturday, March 17, 2012

The future of FMRI connectivity

There is a nice review paper on FMRI connectivity in Neuroimage:

Stephen M. Smith, The future of FMRI connectivity, NeuroImage, accepted, 2012

Clearly it has attracted the interest of many researchers in the field: it is already one of the most downloaded papers in the journal, even though it is still just an accepted paper!

Here is the abstract:

“FMRI connectivity” encompasses many areas of research, including resting-state networks, biophysical modelling of task-FMRI data and bottom-up simulation of multiple individual neurons interacting with each other. In this brief paper I discuss several outstanding areas that I believe will see exciting developments in the next few years, in particular concentrating on how I think the currently separate approaches will increasingly need to take advantage of each other's respective complementarities.

And the outline of the contents:
Contents
Introduction - brief review of concepts
      Network modelling via nodes and edges; functional vs. effective connectivity
      Spatial patterns of connectivity
      Connectivity modelling from multiple subjects
Model complexity
      Bottom-up modelling
      Graph theory
      FMRI network modelling methods
Causality
      Patterns of conditional independence; observational vs. interventional studies
      Dynamic biological Bayesian models
      Future
Nonlinearities and temporal nonstationarities
Other issues… and conclusions

Although this is a brief review, the author covers many important aspects of fMRI connectivity. Still, I think two aspects deserve more discussion. One is sparsity-based models, in the section on model complexity. The other is how to verify the fidelity of an estimated connectivity network. I hope to see these two issues, especially the second, discussed in detail in future review papers.

Friday, March 16, 2012

Performance measurement index for compressed sensing of structured signals

Compressed sensing/sparse signal recovery has entered a new phase: exploiting the structure of signals for improved performance. Almost all natural signals (e.g. images, videos, speech, bio-signals) have rich structure, and we know both theoretically and empirically that exploiting such structure improves performance. However, in the literature on compressed sensing of structured signals, the mean square error (MSE) still serves as the main (or even the only) performance index for measuring recovery quality. It's time to re-think the use of MSE after reading the following nice paper:

Zhou Wang, Alan C. Bovik, Mean Squared Error: Love It or Leave It? A New Look at Signal Fidelity Measures, IEEE Signal Processing Magazine, vol.26, no.1, 2009, pp.98-117

The authors list the implicit assumptions when using MSE, which I quote below:

1) Signal fidelity is independent of temporal or spatial relationships between the samples of the original signal. In other words, if the original and distorted signals are randomly re-ordered in the same way, then the MSE between them will be unchanged.

2) Signal fidelity is independent of any relationship between the original signal and the error signal. For a given error signal, the MSE remains unchanged, regardless of which original signal it is added to.

3) Signal fidelity is independent of the signs of the error signal samples.

4) All signal samples are equally important to signal fidelity. 


Obviously, when we measure the recovery quality of structured signals, the above assumptions are violated.
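The first three assumptions can be checked numerically. The following toy script (the signals and noise levels are illustrative choices of mine, not from the paper) confirms that MSE is blind to re-ordering, to the choice of original signal, and to the sign of the error:

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth "structured" signal and a noisy copy of it
x = np.sin(np.linspace(0, 4 * np.pi, 200))
e = 0.1 * rng.standard_normal(x.size)
y = x + e

mse = lambda a, b: np.mean((a - b) ** 2)

# Assumption 1: re-ordering both signals the same way leaves MSE unchanged,
# even though the permuted signals have lost all temporal structure.
p = rng.permutation(x.size)
assert np.isclose(mse(x, y), mse(x[p], y[p]))

# Assumption 2: the same error added to a different original gives the same MSE.
z = np.cos(np.linspace(0, 4 * np.pi, 200))
assert np.isclose(mse(x, x + e), mse(z, z + e))

# Assumption 3: flipping the sign of the error leaves MSE unchanged.
assert np.isclose(mse(x, x + e), mse(x, x - e))
```

A structure-aware index such as SSIM breaks exactly these invariances, which is why it separates the distorted images below while MSE cannot.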

The authors give a number of nice examples. Here is one of them:


(a) is the original image, and (b)-(d) are the same image with noise added. (b), (c), and (d) have almost the same MSE, yet their quality is clearly different. MSE fails to reflect this difference, whereas the two other measures, SSIM and CW-SSIM, capture it well.

SSIM, the Structural SIMilarity index, was proposed for structured signals, especially images. The basic form of SSIM (measured over small patches x and y of an image) is

SSIM(x, y) = l(x, y) · c(x, y) · s(x, y),

which measures the similarities of three aspects of the image patches: the similarity l(x,y) of the local patch luminances (brightness values), the similarity c(x,y) of the local patch contrasts, and the similarity s(x,y) of the local patch structures. The SSIM index is computed locally within a sliding window that moves pixel-by-pixel across the image. The SSIM score of the entire image is then computed by simply averaging the SSIM values across the image.

There are many variants of the basic SSIM index. Interested people can read the paper and the references cited.

The code for computing SSIM can be found here: https://ece.uwaterloo.ca/~z70wang/research/ssim/

Note that the SSIM index can be easily modified to measure 1-D structured signals.
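As a rough illustration of that 1-D modification, here is a minimal sketch using the standard combined SSIM expression (the product l·c·s collapses into a single ratio); the window length and the stabilizing constants C1 and C2 are illustrative choices of mine, not values from the paper:

```python
import numpy as np

def ssim_1d(x, y, win=11, C1=1e-4, C2=9e-4):
    """Basic SSIM for 1-D signals: average over sliding windows of the
    combined luminance * contrast * structure similarity."""
    scores = []
    for i in range(len(x) - win + 1):
        a, b = x[i:i + win], y[i:i + win]
        mu_a, mu_b = a.mean(), b.mean()
        va, vb = a.var(), b.var()
        cov = ((a - mu_a) * (b - mu_b)).mean()
        # Combined form: l(a,b) * c(a,b) * s(a,b) collapses to this ratio
        s = ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
            ((mu_a ** 2 + mu_b ** 2 + C1) * (va + vb + C2))
        scores.append(s)
    return float(np.mean(scores))
```

A signal compared with itself scores exactly 1, while a noisy copy scores lower even when its MSE is small, which is precisely the sensitivity to structure that MSE lacks.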

Added:
Igor has two posts on SSIM in his Nuit Blanche
http://nuit-blanche.blogspot.com/2011/11/randomized-thoughts-and-feedbacks.html

And here is the most recent improvement on SSIM:

C. Charrier, K. Knoblauch, L. T. Maloney, and A. C. Bovik, Calibrating MS-SSIM for Compression Distortions Using MLDS, ICIP 2011.

Thursday, March 1, 2012

A paper has been accepted by CVPR 2012

We have a new paper just accepted by CVPR 2012:

Sparse Bayesian Multi-Task Learning for Predicting Cognitive Outcomes from Neuroimaging Measures in Alzheimer's Disease.

This study proposes a sparse Bayesian multi-task learning algorithm to improve the accuracy of predicting cognitive outcomes from neuroimaging measures in Alzheimer's disease. A variant of T-MSBL is proposed, and its connection to existing algorithms in this field is established, showing the advantages of the T-MSBL family. We achieved the highest prediction accuracy compared to the latest results published in top journals in 2011.

I will introduce the paper in detail in my next post. The camera-ready version can be downloaded from here, and the code will be posted soon.