
Publication


Featured research published by Abd-Krim Seghouane.


IEEE Transactions on Neural Networks | 2007

The AIC Criterion and Symmetrizing the Kullback–Leibler Divergence

Abd-Krim Seghouane; Shun-ichi Amari

The Akaike information criterion (AIC) is a widely used tool for model selection. AIC is derived as an asymptotically unbiased estimator of a function used for ranking candidate models, which is a variant of the Kullback-Leibler divergence between the true model and the approximating candidate model. Despite the computational and theoretical advantages of the Kullback-Leibler divergence, its lack of symmetry can become inconvenient in model selection applications: simple examples show that reversing the roles of the arguments can yield substantially different results. In this paper, three new functions for ranking candidate models are proposed. These functions are constructed by symmetrizing the Kullback-Leibler divergence between the true model and the approximating candidate model, using the arithmetic, geometric, and harmonic means. It is found that the original AIC criterion is an asymptotically unbiased estimator of all three functions. Using one of the proposed ranking functions, a new bias correction to AIC is derived for univariate linear regression models. A simulation study based on polynomial regression is provided to compare the proposed ranking functions with AIC and the new correction with AICc.
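As background for the ranking functions above, a minimal sketch of ordinary AIC and its small-sample correction AICc in the paper's polynomial-regression simulation setting; the symmetrized variants proposed in the paper are not reproduced, and all function and variable names are illustrative.

```python
# Sketch: AIC and AICc for polynomial regression (illustrative names).
import numpy as np

def fit_poly_aic(x, y, order):
    """Fit a polynomial of the given order by least squares and return
    (AIC, AICc) under a Gaussian noise model."""
    n = len(y)
    X = np.vander(x, order + 1)                # design matrix
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = order + 2                              # coefficients + noise variance
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1) # small-sample correction
    return aic, aicc

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.1 * rng.standard_normal(30)  # true order 2
best_aicc = min(range(1, 8), key=lambda p: fit_poly_aic(x, y, p)[1])
```

Because the AICc correction term is always positive, AICc penalizes high orders more heavily than AIC at this sample size.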


Neural Computation | 2012

Identification of directed influence: Granger causality, Kullback–Leibler divergence, and complexity

Abd-Krim Seghouane; Shun-ichi Amari

Detecting and characterizing causal interdependencies and couplings between different activated brain areas from functional neuroimage time series measurements of their activity constitutes a significant step toward understanding the process of brain functions. In this letter, we make the simple point that all current statistics used to make inferences about directed influences in functional neuroimage time series are variants of the same underlying quantity. This includes directed transfer entropy, transinformation, Kullback-Leibler formulations, conditional mutual information, and Granger causality. Crucially, in the case of autoregressive modeling, the underlying quantity is the likelihood ratio that compares models with and without directed influences from the past when modeling the influence of one time series on another. This framework is also used to derive the relation between these measures of directed influence and the complexity or the order of directed influence. These results provide a framework for unifying the Kullback-Leibler divergence, Granger causality, and the complexity of directed influence.
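The letter's central observation for the autoregressive case, that the underlying quantity is a likelihood ratio comparing models with and without the past of the other series, can be sketched as follows; the helper name and simulation are illustrative.

```python
# Sketch: Granger causality from x to y as a log likelihood ratio between
# AR models of y with and without the past of x.
import numpy as np

def granger_xy(x, y, p=1):
    """Return n * log(restricted RSS / full RSS), the log likelihood
    ratio statistic for 'x Granger-causes y' at lag order p."""
    n = len(y) - p
    Yl = np.column_stack([y[p - i - 1 : n + p - i - 1] for i in range(p)])
    Xl = np.column_stack([x[p - i - 1 : n + p - i - 1] for i in range(p)])
    yt = y[p:]
    ones = np.ones((n, 1))
    def rss(D):
        b, *_ = np.linalg.lstsq(D, yt, rcond=None)
        return np.sum((yt - D @ b) ** 2)
    r_restricted = rss(np.hstack([ones, Yl]))   # past of y only
    r_full = rss(np.hstack([ones, Yl, Xl]))     # plus past of x
    return n * np.log(r_restricted / r_full)

rng = np.random.default_rng(1)
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):                           # x drives y, not vice versa
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
gc_xy = granger_xy(x, y)                        # large
gc_yx = granger_xy(y, x)                        # near zero
```

The asymmetry of the two statistics recovers the direction of influence built into the simulation.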


IEEE Transactions on Medical Imaging | 2012

HRF Estimation in fMRI Data With an Unknown Drift Matrix by Iterative Minimization of the Kullback–Leibler Divergence

Abd-Krim Seghouane; Adnan Shah

Hemodynamic response function (HRF) estimation in noisy functional magnetic resonance imaging (fMRI) plays an important role when investigating the temporal dynamics of a brain region's response during activation. Nonparametric methods, which allow more flexibility in the estimation by inferring the HRF at each time sample, have provided improved performance in comparison to parametric methods. In this paper, the mixed-effects model is used to derive a new algorithm for nonparametric maximum likelihood HRF estimation. In this model, the random effect is used to better account for the variability of the drift. Contrary to the usual approaches, the proposed algorithm has the benefit of considering an unknown and therefore flexible drift matrix. This allows the effective representation of a broader class of drift signals and therefore reduces the error in approximating the drift component. Estimates of the HRF and the hyperparameters are derived by iterative minimization of the Kullback-Leibler divergence between a model family of probability distributions defined using the mixed-effects model and a desired family of probability distributions constrained to be concentrated on the observed data. The performance of the proposed method is demonstrated on simulated and real fMRI data, the latter originating from both event-related and block design fMRI experiments.
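For contrast with the paper's unknown-drift mixed-effects algorithm, a minimal fixed-drift baseline: a nonparametric least-squares HRF estimate after projecting out a known polynomial drift subspace. All names are illustrative and this is not the paper's method.

```python
# Sketch: nonparametric HRF estimation with a *fixed* polynomial drift
# basis. X is a matrix of lagged stimulus regressors; the HRF h is
# inferred at each time sample.
import numpy as np

def estimate_hrf(y, stim, L, drift_order=3):
    """Least-squares HRF over L lags after projecting out a polynomial
    drift subspace (all names here are illustrative)."""
    n = len(y)
    X = np.column_stack([np.concatenate([np.zeros(i), stim[: n - i]])
                         for i in range(L)])      # lagged stimuli
    t = np.linspace(-1, 1, n)
    P = np.vander(t, drift_order + 1)             # drift basis
    Q = np.eye(n) - P @ np.linalg.pinv(P)         # project out the drift
    h, *_ = np.linalg.lstsq(Q @ X, Q @ y, rcond=None)
    return h

# synthetic check: crude HRF shape plus a slow quadratic drift
n, L = 200, 12
rng = np.random.default_rng(2)
stim = (rng.random(n) < 0.1).astype(float)
lags = np.arange(L)
h_true = lags * np.exp(-lags / 2.0)
y = np.convolve(stim, h_true)[:n] + 0.5 * np.linspace(0, 1, n) ** 2
y += 0.05 * rng.standard_normal(n)
h_hat = estimate_hrf(y, stim, L)
```

With a drift that lies in the assumed polynomial subspace, the projection removes it exactly; the paper's point is precisely that real drifts need not obey such a fixed basis.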


IEEE Transactions on Circuits and Systems | 2006

Vector Autoregressive Model-Order Selection From Finite Samples Using Kullback's Symmetric Divergence

Abd-Krim Seghouane

In this paper, a new small-sample model selection criterion for vector autoregressive (VAR) models is developed. The proposed criterion is named the Kullback information criterion (KICvc), where the notation vc stands for vector correction, and it can be considered an extension of KIC to VAR models. KICvc adjusts KIC to be an unbiased estimator of the variant of the Kullback symmetric divergence, assuming that the true model is correctly specified or overfitted. Furthermore, KICvc provides better VAR model-order choices than KIC in small samples. Simulation results show that the proposed criterion selects the model order more accurately than other asymptotically efficient methods when applied to VAR model selection in small samples. As a result, KICvc serves as an effective tool for selecting a VAR model of appropriate order. A theoretical justification of the proposed criterion is presented.
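A sketch of the asymptotic KIC penalty (3k per Cavanaugh 1999, versus AIC's 2k) applied to univariate AR order selection; the paper's small-sample vector correction KICvc is not derived here, and the simulation is illustrative.

```python
# Sketch: AR order selection comparing the AIC (2k) and KIC (3k) penalties.
import numpy as np

def ar_criteria(y, p):
    """Return (AIC, KIC) for an AR(p) fit by least squares."""
    n = len(y) - p
    D = np.column_stack([y[p - i - 1 : n + p - i - 1] for i in range(p)])
    yt = y[p:]
    b, *_ = np.linalg.lstsq(D, yt, rcond=None)
    s2 = np.sum((yt - D @ b) ** 2) / n
    k = p + 1                        # AR coefficients + noise variance
    m2ll = n * np.log(s2)            # -2 log-likelihood up to constants
    return m2ll + 2 * k, m2ll + 3 * k

rng = np.random.default_rng(3)
T = 400
y = np.zeros(T)
for t in range(2, T):                # true AR(2) process
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()
kic_order = min(range(1, 9), key=lambda p: ar_criteria(y, p)[1])
```

The heavier 3k penalty reflects estimating the symmetric rather than the directed divergence, which tends to guard against overfitting at moderate sample sizes.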


IEEE Transactions on Neural Networks | 2009

Model Selection Criteria for Image Restoration

Abd-Krim Seghouane

In this brief, the image restoration problem is approached as a learning system problem, in which a model is to be selected and parameters are estimated. Although the parameters which correspond to the restored image can easily be obtained, their quality depends heavily on a proper choice of the regularization parameter that controls the tradeoff between fidelity to the blurred noisy observed image and the smoothness of the restored image. By analogy between the model selection philosophy, a fundamental task in systems learning, and the choice of the regularization parameter, two criteria are proposed in this brief for selecting the regularization parameter. These criteria are based on Bayesian arguments and the Kullback-Leibler divergence, and they can be considered as extensions of the Bayesian information criterion (BIC) and the Akaike information criterion (AIC) to the image restoration problem.
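The analogy between choosing a regularization parameter and model selection can be sketched with a generic BIC-style score for Tikhonov (ridge) regularization, using the trace of the hat matrix as the effective number of parameters; this is a common stand-in, not the brief's actual criteria.

```python
# Sketch: selecting a regularization parameter by a BIC-style score.
import numpy as np

def bic_ridge(X, y, lam):
    """Return (BIC-style score, effective degrees of freedom) for the
    ridge solution with regularization parameter lam (illustrative)."""
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)  # hat matrix
    resid = y - H @ y
    df = np.trace(H)                  # effective number of parameters
    return n * np.log(np.sum(resid ** 2) / n) + df * np.log(n), df

rng = np.random.default_rng(4)
n, d = 100, 30
X = rng.standard_normal((n, d))
beta = np.zeros(d); beta[:5] = 2.0    # only a few active directions
y = X @ beta + rng.standard_normal(n)
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(lams, key=lambda l: bic_ridge(X, y, l)[0])
```

Larger regularization shrinks the hat matrix, so the effective degrees of freedom fall monotonically with the parameter, which is what lets a model selection criterion trade fit against smoothness.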


Signal Processing | 2007

Fast communication: Bayesian estimation of the number of principal components

Abd-Krim Seghouane; Andrzej Cichocki

Recently, the technique of principal component analysis (PCA) has been expressed as the maximum likelihood solution for a generative latent variable model. A central issue in PCA is choosing the number of principal components to retain, which can be considered a problem of model selection. In this paper, the probabilistic reformulation of PCA is used as the basis for a Bayesian approach to PCA, from which a model selection criterion for determining the true dimensionality of the data is derived. The proposed criterion is similar to the Bayesian information criterion, BIC, with a particular goodness-of-fit term, and it is consistent. A simulation example that illustrates its performance for determining the number of principal components to retain is presented.
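A sketch of the underlying idea, assuming the standard probabilistic-PCA maximum likelihood (Tipping and Bishop form) scored with a generic BIC penalty; the paper's criterion differs in its goodness-of-fit term, and the parameter count here is illustrative.

```python
# Sketch: choosing the number of principal components by a BIC-style
# score built on the probabilistic PCA maximum likelihood.
import numpy as np

def ppca_bic(X, k):
    """BIC-style score for a k-component probabilistic PCA model."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    lam = np.linalg.eigvalsh(Xc.T @ Xc / n)[::-1]   # descending eigenvalues
    s2 = lam[k:].mean()                             # ML noise variance
    # ML log-likelihood of PPCA (Tipping & Bishop form)
    ll = -0.5 * n * (d * np.log(2 * np.pi)
                     + np.sum(np.log(lam[:k])) + (d - k) * np.log(s2) + d)
    m = d * k - k * (k - 1) / 2 + 1                 # free parameters
    return -2 * ll + m * np.log(n)

rng = np.random.default_rng(5)
n, d, k_true = 500, 10, 3
W = rng.standard_normal((d, k_true))
X = rng.standard_normal((n, k_true)) @ W.T + 0.1 * rng.standard_normal((n, d))
k_hat = min(range(1, d), key=lambda k: ppca_bic(X, k))
```

With a clear eigenvalue gap between signal and noise, the penalized likelihood recovers the generating dimensionality.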


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2003

A small sample model selection criterion based on Kullback's symmetric divergence

Abd-Krim Seghouane; Maïza Bekara; Gilles Fleury

The Kullback information criterion (KIC) is a recently developed tool for statistical model selection (Cavanaugh, J.E., Statistics and Probability Letters, vol.42, p.333-43, 1999). KIC serves as an asymptotically unbiased estimator of a variant of the Kullback symmetric divergence, also known as the J-divergence. A bias correction of the Kullback symmetric information criterion is derived for linear models. The correction is of particular use when the sample size is small or when the number of fitted parameters is a moderate to large fraction of the sample size. For linear regression models, the corrected method, called KICc, is an exactly unbiased estimator of a variant of the Kullback symmetric divergence between the true unknown model and the candidate fitted model. Furthermore, KICc is found to provide better model order choices than other asymptotically efficient methods when applied to autoregressive time series models.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2015

A sequential dictionary learning algorithm with enforced sparsity

Abd-Krim Seghouane; Muhammad Hanif

Dictionary learning algorithms have received widespread acceptance for data analysis and signal representation problems. These algorithms alternate between two stages: the sparse coding stage and the dictionary update stage. In all existing dictionary learning algorithms the use of sparsity has been limited to the sparse coding stage, while the algorithms differ in the dictionary update stage, which can be performed sequentially or in parallel. The singular value decomposition (SVD) has been successfully used for sequential dictionary updates. In this paper we propose a dictionary learning algorithm that includes a sparsity constraint in the dictionary update stage as well. The cost function used to include sparsity in the dictionary update stage is derived using the link between the SVD and rank-one matrix approximation. The effectiveness of the proposed dictionary learning method is tested on synthetic data and an image processing application. The results show that including a sparsity constraint in the dictionary update stage is beneficial.
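One way to picture a sparsity constraint inside a sequential, SVD-based (K-SVD-style) dictionary update; here a simple hard threshold on the atom stands in for the paper's SVD-derived cost function, and all names are illustrative.

```python
# Sketch: one sequential rank-one dictionary update with a sparsity
# constraint also applied to the atom itself.
import numpy as np

def update_atom(Y, D, A, j, atom_sparsity=5):
    """Rank-one update of atom j and its coefficient row, then keep only
    the atom's largest entries (illustrative, not the paper's exact rule)."""
    used = np.nonzero(A[j])[0]            # signals that use atom j
    if used.size == 0:
        return D, A
    # restricted error with atom j's contribution added back
    E = Y[:, used] - D @ A[:, used] + np.outer(D[:, j], A[j, used])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    d_new = U[:, 0]                       # leading left singular vector
    # enforce sparsity on the atom: keep its largest-magnitude entries
    keep = np.argsort(np.abs(d_new))[-atom_sparsity:]
    d_sparse = np.zeros_like(d_new)
    d_sparse[keep] = d_new[keep]
    d_sparse /= np.linalg.norm(d_sparse)
    D[:, j] = d_sparse
    A[j, used] = d_sparse @ E             # optimal row for the fixed atom
    return D, A

rng = np.random.default_rng(6)
m, K, N = 20, 8, 100
D = rng.standard_normal((m, K))
D /= np.linalg.norm(D, axis=0)
A = rng.standard_normal((K, N)) * (rng.random((K, N)) < 0.3)
Y = D @ A + 0.01 * rng.standard_normal((m, N))
D, A = update_atom(Y, D, A, 0)
```

After the update the atom remains unit norm but has a controlled number of nonzero entries, the property the paper argues for in the update stage.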


International Symposium on Biomedical Imaging (ISBI) | 2013

Improving functional connectivity detection in fMRI by combining sparse dictionary learning and canonical correlation analysis

Muhammad Usman Khalid; Abd-Krim Seghouane

In this paper, a novel framework that combines data-driven methods is proposed for functional connectivity analysis of functional magnetic resonance imaging (fMRI) data. The basic idea is to overcome the shortcomings of the compressed-sensing-based data-driven method by incorporating canonical correlation analysis (CCA) to extract a more meaningful temporal profile, based solely on the underlying brain hemodynamics, which can be further investigated to detect functional connectivity using regression analysis. We apply our method to synthetic and task-related fMRI data to show that the combined framework, which better adapts to individual variations of distinct activity patterns in the brain, is an effective approach to revealing functionally connected brain regions.
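The CCA building block of the framework can be sketched in isolation: canonical correlations between two data matrices via QR and SVD, with synthetic data standing in for dictionary-derived temporal profiles; the dictionary-learning half of the pipeline is not reproduced.

```python
# Sketch: canonical correlations between the column spaces of X and Y.
import numpy as np

def cca(X, Y):
    """Canonical correlations via QR of the centered matrices followed
    by an SVD of the cross-product of the orthonormal bases."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(7)
n = 200
shared = rng.standard_normal((n, 1))        # common temporal profile
X = np.hstack([shared + 0.1 * rng.standard_normal((n, 1)),
               rng.standard_normal((n, 2))])
Y = np.hstack([shared + 0.1 * rng.standard_normal((n, 1)),
               rng.standard_normal((n, 2))])
rho = cca(X, Y)                             # leading correlation is large
```

A large leading canonical correlation flags the shared temporal component, which is the signal the framework feeds into the subsequent regression analysis.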


IEEE Transactions on Image Processing | 2011

A Kullback–Leibler Divergence Approach to Blind Image Restoration

Abd-Krim Seghouane

A new algorithm for maximum-likelihood blind image restoration is presented in this paper. It is obtained by modeling the original image and the additive noise as multivariate Gaussian processes with unknown covariance matrices. The blurring process is specified by its point spread function, which is also unknown. Estimates of the original image and the blur are derived by alternating minimization of the Kullback-Leibler divergence between a model family of probability distributions defined using the linear image degradation model and a desired family of probability distributions constrained to be concentrated on the observed data. The algorithm has the advantage of providing closed-form expressions for the updated parameters and of converging after only a few iterations. A simulation example that illustrates the effectiveness of the proposed algorithm is presented.

Collaboration


Abd-Krim Seghouane's collaborations.

Top Co-Authors

Adnan Shah (Australian National University)
Muhammad Usman Khalid (Australian National University)
Asif Iqbal (University of Melbourne)
Ju Lynn Ong (Australian National University)
Muhammad Hanif (Australian National University)
Chee Ming Ting (Universiti Teknologi Malaysia)
Yousef Saad (University of Minnesota)