
Publications


Featured research published by Nasser Mourad.


IEEE Transactions on Signal Processing | 2010

Minimizing Nonconvex Functions for Sparse Vector Reconstruction

Nasser Mourad; James P. Reilly

In this paper, we develop a novel methodology for minimizing a class of nonconvex (concave on the non-negative orthant) functions for solving an underdetermined linear system of equations As = x when the solution vector s is known a priori to be sparse. The proposed technique is based on locally replacing the original objective function by a quadratic convex function that is easily minimized. The resulting algorithm is iterative and is guaranteed to converge to a fixed point of the original objective function. For a certain selection of convex objective functions, the class of algorithms called iterative reweighted least squares (IRLS) is shown to be a special case of the proposed methodology; thus, the proposed algorithms generalize and unify the previous methods. In addition, we propose a new class of algorithms with better convergence properties than the regular IRLS algorithms, which can therefore be considered enhancements of them. Since the original objective functions are nonconvex, the proposed algorithm is susceptible to convergence to a local minimum. To alleviate this difficulty, we propose a random perturbation technique that enhances the performance of the proposed algorithm. The numerical results show that the proposed algorithms outperform some of the well-known algorithms that are usually used to solve the same problem.
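
As a concrete illustration of the IRLS family that the paper generalizes, the sketch below solves min Σ|s_i|^p subject to As = x by repeatedly solving a weighted least-norm problem. It is a minimal generic sketch, assuming an ℓp (0 < p ≤ 1) sparsity surrogate; the weight rule, the regularization constant eps, and the fixed iteration count are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def irls_sparse(A, x, p=0.5, n_iter=50, eps=1e-8):
    """IRLS sketch for min sum|s_i|^p subject to A s = x (A underdetermined)."""
    s = np.linalg.lstsq(A, x, rcond=None)[0]     # minimum-norm starting point
    for _ in range(n_iter):
        # Weights derived from the current iterate; eps guards against division by zero.
        w_inv = np.abs(s) ** (2.0 - p) + eps     # diagonal of W^{-1}
        AWi = A * w_inv                          # A @ diag(w_inv)
        # Weighted least-norm solution: s = W^{-1} A^T (A W^{-1} A^T)^{-1} x
        s = w_inv * (A.T @ np.linalg.solve(AWi @ A.T, x))
    return s

# Toy usage: recover a 5-sparse vector from 30 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
s_true = np.zeros(100)
s_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
s_hat = irls_sparse(A, A @ s_true)
print(np.linalg.norm(s_hat - s_true))
```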


Clinical Neurophysiology | 2011

Comparison of artifact correction methods for infant EEG applied to extraction of event-related potential signals

Takako Fujioka; Nasser Mourad; Chao He; Laurel J. Trainor

OBJECTIVE: EEG recording is useful for neurological and cognitive assessment, but acquiring reliable data in infants and special populations faces the challenges of limited recording time, high-amplitude background activity, and movement-related artifacts. This study objectively evaluated our previously proposed ERP analysis techniques.
METHODS: We compared three artifact removal techniques: Conventional Trial Rejection (CTR), Independent Channel Rejection (ICR; He et al., 2007), and Artifact Blocking (AB; Mourad et al., 2007). We embedded a synthesized auditory ERP signal into real EEG activity recorded from 4-month-old infants and then compared the ability of the three techniques to extract that signal from the noise.
RESULTS: Examination of correlation coefficients, variance in the gain across sensors, and residual power revealed that ICR and AB were significantly more successful than CTR at accurately extracting the signal. Overall performance of ICR and AB was comparable, although the AB algorithm introduced less spatial distortion than ICR.
CONCLUSIONS: ICR and AB are improvements over CTR in cases where the signal-to-noise ratio is low.
SIGNIFICANCE: Both ICR and AB are improvements over standard techniques. AB can be applied to both continuous and epoched EEG.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2007

A Simple and Fast Algorithm for Automatic Suppression of High-Amplitude Artifacts in EEG Data

Nasser Mourad; James P. Reilly; H. de Bruin; G. Hasey; Duncan J. MacCrimmon

In this paper we present a simple and fast technique for correcting high-amplitude artifacts that contaminate EEG signals, such as those due to ocular movement, eye blinks, and head movement. Since the measured EEG data can be modeled as a linear combination of brain sources and artifacts, the proposed technique is based on multiplying the observed data matrix by a blocking matrix that blocks high-amplitude artifacts while linearly transforming the other sources without distortion. The advantages of this technique are that: 1) it is relatively fast, so it can be applied in real time; 2) it is completely automatic; and 3) it can be successfully applied to signals for which ICA-based algorithms fail.
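
The sketch below shows one plausible way such a blocking matrix can be built: samples whose amplitude exceeds a threshold are treated as artifact, a reference copy of the data with those samples zeroed is formed, and the blocking matrix is the least-squares map from the raw data onto that reference. The thresholding rule and the least-squares construction are assumptions made for illustration and are not necessarily the exact procedure of the paper.

```python
import numpy as np

def artifact_block(X, threshold):
    """X: channels x samples EEG matrix. Returns artifact-suppressed data B @ X."""
    X_ref = np.where(np.abs(X) > threshold, 0.0, X)   # zero out high-amplitude samples
    # B minimizes ||B X - X_ref||_F, i.e. B = X_ref X^T (X X^T)^{-1}
    B = X_ref @ X.T @ np.linalg.pinv(X @ X.T)
    return B @ X

# Toy usage: suppress a simulated eye-blink exceeding +/-100 uV on a 32-channel recording.
rng = np.random.default_rng(1)
X = 10.0 * rng.standard_normal((32, 5000))
X[:5, 1000:1050] += 300.0                             # simulated high-amplitude artifact
X_clean = artifact_block(X, threshold=100.0)
```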


European Journal of Neuroscience | 2011

Development of auditory-specific brain rhythm in infants

Takako Fujioka; Nasser Mourad; Laurel J. Trainor

Human infants rapidly develop their auditory perceptual abilities and acquire culture‐specific knowledge in speech and music in the second 6 months of life. In the adult brain, neural rhythm around 10 Hz in the temporal lobes is thought to reflect sound analysis and subsequent cognitive processes such as memory and attention. To study when and how such rhythm emerges in infancy, we examined electroencephalogram (EEG) recordings in infants 4 and 12 months of age during sound stimulation and silence. In the 4‐month‐olds, the amplitudes of a narrowly tuned 4‐Hz brain rhythm, recorded from bilateral temporal electrodes, were modulated by sound stimuli. In the 12‐month‐olds, the sound‐induced modulation occurred at a faster 6‐Hz rhythm at temporofrontal locations. The brain rhythms in the older infants consisted of more complex components, as was evident even in individual data. These findings suggest that auditory‐specific rhythmic neural activity, which is already established before 6 months of age, involves more speed‐efficient long‐range neural networks by the age of 12 months, when long‐term memory for native phoneme representation and for musical rhythmic features is formed. We suggest that maturation of distinct rhythmic components occurs in parallel, and that sensory‐specific functions bound to particular thalamo‐cortical networks are transferred to newly developed higher‐order networks step by step until adult hierarchical neural oscillatory mechanisms are achieved across the whole brain.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2009

ℓp minimization for sparse vector reconstruction

Nasser Mourad; James P. Reilly

In this paper we present a new technique for minimizing a class of nonconvex functions for solving underdetermined systems of linear equations. The proposed technique is based on locally replacing the nonconvex objective function by a convex objective function. The main property of the utilized convex function is that it is minimized at a point that reduces the value of the original concave function. The resulting algorithm is iterative and outperforms some previous algorithms that have been applied to the same problem.
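
The descent property stated above can be checked numerically for the scalar case: since t ↦ t^(p/2) is concave, its tangent at the current iterate yields a quadratic function of s that lies above |s|^p everywhere and touches it at the current point, so minimizing the quadratic cannot increase the original concave objective. The surrogate used below is this standard tangent construction, given only as an illustration of the general technique rather than the paper's specific choice.

```python
import numpy as np

# Quadratic majorizer of f(s) = |s|^p built at the current iterate s0:
# the tangent of t^(p/2) at t0 = s0^2 lies above it because t^(p/2) is concave.
p, s0 = 0.5, 1.3
s = np.linspace(-3.0, 3.0, 601)

f = np.abs(s) ** p
q = np.abs(s0) ** p + (p / 2.0) * np.abs(s0) ** (p - 2.0) * (s ** 2 - s0 ** 2)

print(np.all(q >= f - 1e-12))                                      # q majorizes f everywhere
print(np.isclose(q[np.argmin(np.abs(s - s0))], np.abs(s0) ** p))   # q touches f at s0
```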


Signal Processing | 2017

Majorization-minimization for blind source separation of sparse sources

Nasser Mourad; James P. Reilly; T. Kirubarajan

In this paper we propose the Majorization-Minimization Blind Sparse Source Separation (MM-BSSS) algorithm for solving the blind source separation (BSS) problem when the source signals are known a priori to be sparse, or can be sparsely represented in some dictionary. The algorithm capitalizes on a previous result by Chartrand (2007) showing that certain classes of nonconvex functions perform better than the convex ℓ1-norm in measuring the sparsity of a signal. We propose a majorization-minimization (MM) method for minimizing such a nonconvex objective function; the method can be simplified by choosing the nonconvex function to be separable. We employ a previously developed technique (Mourad and Reilly, 2010) for constructing the MM surrogate function, which reduces the sparse BSS problem to an iterative computation of the minor eigenvectors of particular covariance matrices. These features permit a computationally efficient implementation. The proposed algorithm enjoys several advantages, such as robustness to noise and the ability to estimate the number of source signals. Numerical results show that the proposed algorithm outperforms other well-known algorithms that solve the same problem.

Highlights:
- A new algorithm for solving the BSS problem when the source signals are sparse, or can be sparsely represented in some dictionary.
- The MM technique locally replaces the original nonconvex function by a smooth convex function that can be efficiently minimized.
- We prove that the global minimum of the surrogate function is guaranteed to reduce the value of the original nonconvex function.
- The sparse BSS problem reduces to an iterative computation of the minor eigenvectors of particular covariance matrices, permitting a computationally efficient implementation.
- The algorithm is robust to noise, can estimate the number of source signals, and outperforms other well-known algorithms in numerical experiments.
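
A rough sketch of the central reduction described above: on whitened mixtures, each separating vector is obtained by iterating a minor-eigenvector computation on a weighted covariance matrix, with the weights derived from the current output through an ℓp diversity measure. The whitening step, the simple orthogonal-complement deflation, the assumption of a known number of sources, and all function and parameter names are illustrative choices, not the exact MM-BSSS algorithm.

```python
import numpy as np
from scipy.linalg import null_space

def whiten(X):
    """Center and whiten the mixtures X (channels x samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    return E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ Xc

def minor_eig_vector(Z, p=0.5, n_iter=30, eps=1e-6):
    """One separating vector: minimize sum_t |w^T z_t|^p over unit-norm w by
    repeatedly taking the minor eigenvector of a weighted covariance matrix."""
    w = np.linalg.eigh(Z @ Z.T)[1][:, 0]           # any unit-norm starting vector
    for _ in range(n_iter):
        y = w @ Z
        D = (np.abs(y) + eps) ** (p - 2.0)         # surrogate weights from current output
        C = (Z * D) @ Z.T                          # weighted covariance Z diag(D) Z^T
        w = np.linalg.eigh(C)[1][:, 0]             # minor eigenvector (smallest eigenvalue)
    return w

def sparse_bss(X, n_sources, p=0.5):
    """Sequentially estimate separating vectors for n_sources sparse sources."""
    Z = whiten(X)
    m = Z.shape[0]
    B = []                                         # separating vectors in whitened space
    for _ in range(n_sources):
        # Restrict the search to the orthogonal complement of the vectors found so far.
        Q = null_space(np.array(B)) if B else np.eye(m)
        B.append(Q @ minor_eig_vector(Q.T @ Z, p=p))
    return np.array(B) @ Z                         # estimated source signals

# Toy usage: three sparse (mostly-zero) sources mixed by a random square matrix.
rng = np.random.default_rng(0)
S = rng.standard_normal((3, 2000)) * (rng.random((3, 2000)) < 0.05)
A = rng.standard_normal((3, 3))
S_hat = sparse_bss(A @ S, n_sources=3)
```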


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2010

Modified hierarchical clustering for sparse component analysis

Nasser Mourad; James P. Reilly

The under-determined blind source separation (BSS) problem is usually solved using the sparse component analysis (SCA) technique, in which the problem is addressed in two steps: the mixing matrix is estimated in the first step, and the sources are estimated in the second. In this paper we propose a novel clustering algorithm for estimating the mixing matrix and the number of sources, which is usually unknown. The proposed algorithm incorporates a statistical test into a hierarchical clustering (HC) algorithm: compact clusters constructed by the HC algorithm are extracted sequentially, with the extraction decision based on the statistical test. To identify the number of sources, as well as the clusters corresponding to the columns of the mixing matrix, we develop a quantitative measure called the concentration parameter. Two numerical examples are presented to demonstrate the ability of the proposed algorithm to estimate the mixing matrix and the number of sources.
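
The sketch below illustrates the clustering step in generic form: time samples with sufficient energy are normalized to unit length, folded onto one half-space, clustered hierarchically, and the cluster centroids are taken as estimates of the mixing-matrix columns. The statistical test and the concentration parameter of the paper are not reproduced; the fixed cluster count and the simple cosine-based concentration score below are stand-in assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def estimate_mixing_matrix(X, n_clusters, activity_thresh=1e-3):
    """X: mixtures (channels x samples). Cluster the dominant scatter directions
    and return unit-norm cluster centroids as mixing-column estimates."""
    # Keep only samples with enough energy, normalize them to unit length, and fold
    # antipodal points onto one half-space (a column and its negative are the same direction).
    norms = np.linalg.norm(X, axis=0)
    P = X[:, norms > activity_thresh] / norms[norms > activity_thresh]
    P = P * np.sign(P[0] + 1e-12)

    Z = linkage(P.T, method='average')             # agglomerative clustering of directions
    labels = fcluster(Z, t=n_clusters, criterion='maxclust')

    columns, concentration = [], []
    for k in range(1, n_clusters + 1):
        C = P[:, labels == k]
        c = C.mean(axis=1)
        c /= np.linalg.norm(c)
        columns.append(c)
        # Crude concentration score: mean cosine between cluster members and the centroid
        # (a stand-in for the paper's concentration parameter).
        concentration.append(float(np.mean(c @ C)))
    return np.array(columns).T, concentration
```

In practice the clustering would typically be restricted to the most energetic, single-source-dominated samples; the hard energy threshold above is only a crude proxy for such a selection.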


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2010

Blind extraction of sparse sources

Nasser Mourad; James P. Reilly

In this paper we propose a new algorithm for solving the blind source extraction (BSE) problem when the desired source signals are sparse. Previous approaches to this problem are based on the independent component analysis (ICA) technique, which extracts a source signal by finding a separating vector that maximizes the non-Gaussianity of the extracted signal. These are general-purpose algorithms and are not designed specifically for extracting sparse signals. The proposed algorithm instead finds a separating vector that maximizes the sparsity of the extracted source signal: a nonconvex objective function that measures the sparsity of the separated signal is locally replaced by a quadratic convex function, resulting in an iterative algorithm in which a new estimate of the separating vector is obtained by solving an eigenvalue decomposition problem. A numerical example is presented to demonstrate the superiority of the proposed algorithm in comparison with a well-known ICA algorithm for extracting sparse sources.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2012

Automatic threshold estimation for Iterative Shrinkage Algorithms used with compressed sensing

Nasser Mourad; James P. Reilly

Recently, a new class of algorithms has been developed that iteratively builds a sparse solution to an underdetermined linear system of equations. These algorithms are known in the literature as Iterative Shrinkage Algorithms (ISA). ISA algorithms depend on a thresholding parameter, which is usually provided by the user. In this paper we develop a new approach for automatically estimating this thresholding parameter. The proposed approach is general in the sense that it does not assume any distribution on the entries of the dictionary matrix, nor on the nonzero coefficients of the solution vector. In addition, the proposed approach is simple and can be adapted for use with newly evolving ISA algorithms. Moreover, simulation results show that the proposed algorithms outperform their previous counterparts.
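
The paper's estimator is not reproduced here, but the sketch below shows where such a threshold enters an iterative shrinkage loop: a plain iterative soft-thresholding (ISTA) iteration in which the per-iteration threshold is set from a robust median-based statistic of the back-projected residual. That median-based rule, and the factor 3 in it, are assumed stand-ins for illustration only.

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_auto_threshold(A, x, n_iter=200):
    """Iterative soft thresholding for x = A s with a data-driven threshold."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (x - A @ s)                      # back-projected residual
        # Illustrative per-iteration threshold: a multiple of the median absolute
        # correlation, scaled by 1/L (not the estimator proposed in the paper).
        thr = 3.0 * np.median(np.abs(g)) / L
        s = soft(s + g / L, thr)
    return s
```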


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2010

Temporally constrained SCA with applications to EEG data

Nasser Mourad; James P. Reilly; Gary Hasey; Duncan J. MacCrimmon

In this paper we propose an iterative algorithm for extracting a sparse source signal when a reference signal for the desired source is available. In the proposed algorithm, a nonconvex objective function is used to measure the diversity (anti-sparsity) of the desired source signal, and this nonconvex function is locally replaced by a quadratic convex function, resulting in a simple iterative algorithm. The proposed algorithm has two versions, which differ in the measure of closeness between the extracted source signal and the reference signal. The algorithm has useful applications in EEG/MEG signal processing, as demonstrated by an example in which eye-blink artifacts are automatically removed from real EEG data.
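
One way a reference signal can be combined with the quadratic surrogate described above is sketched below: the extracted output is penalized both for its distance to the reference and for its lack of sparsity, and the surrogate turns each iteration into a single linear solve. The squared-error closeness measure, the penalty weight lam, and the ℓp diversity measure are assumptions for illustration, not necessarily either of the two versions proposed in the paper.

```python
import numpy as np

def extract_with_reference(X, r, p=0.5, lam=1.0, n_iter=30, eps=1e-6):
    """X: mixtures (channels x samples); r: reference signal (samples,).
    Minimize ||X^T w - r||^2 + lam * sum_t |w^T x_t|^p by an MM iteration:
    the l_p term is replaced by a quadratic surrogate at the current output."""
    w = np.linalg.solve(X @ X.T, X @ r)            # plain least-squares initialization
    for _ in range(n_iter):
        y = w @ X
        D = (np.abs(y) + eps) ** (p - 2.0)         # surrogate weights from current output
        # Surrogate objective: ||X^T w - r||^2 + lam*(p/2) * w^T X diag(D) X^T w
        M = X @ X.T + lam * (p / 2.0) * (X * D) @ X.T
        w = np.linalg.solve(M, X @ r)
    return w, w @ X                                # separating vector and extracted signal
```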
