Nidhal Bouaynaya
Rowan University
Publications
Featured research published by Nidhal Bouaynaya.
IEEE Transactions on Image Processing | 2006
Nidhal Bouaynaya; Mohammed Charif-Chefchaouni; Dan Schonfeld
The theory of spatially variant (SV) mathematical morphology is used to extend and analyze two important image processing applications: morphological image restoration and skeleton representation of binary images. For morphological image restoration, we propose the SV alternating sequential filters and SV median filters. We establish the relation of SV median filters to the basic SV morphological operators (i.e., SV erosions and SV dilations). For skeleton representation, we present a general framework for the SV morphological skeleton representation of binary images. We study the properties of the SV morphological skeleton representation and derive conditions for its invertibility. We also develop an algorithm for the implementation of the SV morphological skeleton representation of binary images. The latter algorithm is based on the optimal construction of the SV structuring element mapping designed to minimize the cardinality of the SV morphological skeleton representation. Experimental results show the dramatic improvement in the performance of the SV morphological restoration and SV morphological skeleton representation algorithms in comparison to their translation-invariant counterparts.
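A minimal numpy sketch of an adjoint SV erosion/dilation pair, with a hypothetical structuring-element map (a cross near the image border, a 3x3 square in the interior) rather than the paper's optimally constructed mapping:

```python
import numpy as np

def sv_erosion(image, se_map):
    """Spatially-variant binary erosion: the structuring element at each
    pixel is given by se_map(y, x) as a list of (dy, dx) offsets."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            vals = []
            for dy, dx in se_map(y, x):
                yy, xx = y + dy, x + dx
                # Outside the frame counts as background.
                vals.append(image[yy, xx] if 0 <= yy < h and 0 <= xx < w else 0)
            out[y, x] = min(vals)
    return out

def sv_dilation(image, se_map):
    """Adjoint SV dilation: each foreground pixel spreads over its own
    structuring element, so (erosion, dilation) form an adjunction."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            if image[y, x]:
                for dy, dx in se_map(y, x):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        out[yy, xx] = 1
    return out

# Illustrative SE map: a cross within 2 pixels of the border, else 3x3.
def se_map(y, x, h=16, w=16):
    if min(y, x, h - 1 - y, w - 1 - x) < 2:
        return [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

img = np.zeros((16, 16), dtype=int)
img[4:12, 4:12] = 1
opened = sv_dilation(sv_erosion(img, se_map), se_map)  # SV opening
```

Because the pair is an adjunction, the SV opening is anti-extensive: `opened` never adds foreground pixels to `img`.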
International Conference on Acoustics, Speech, and Signal Processing | 2005
Nidhal Bouaynaya; Wei Qu; Dan Schonfeld
The particle filtering framework has revolutionized probabilistic tracking of objects in a video sequence. In this framework, the proposal density can be any density as long as its support includes that of the posterior. In practice, however, the number of samples is finite, so the choice of the proposal is crucial to the effectiveness of the tracking. The CONDENSATION filter uses the transition prior as the proposal density. We propose in this paper a motion-based proposal, using adaptive block matching (ABM) as the motion estimation technique. The benefits of this model are twofold: it increases the sampling efficiency and handles abrupt motion changes. Analytically, we derive a Kullback-Leibler (KL)-based performance measure and show that the motion proposal is superior to the proposal of the CONDENSATION filter. Our experiments are applied to head tracking. Finally, we report promising tracking results in complex environments.
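The motion-based proposal can be sketched as a 1-D particle filter whose proposal is the previous state shifted by an externally estimated motion vector; the oracle motion estimate below stands in for ABM, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def track(observations, motion_estimates, n_particles=200,
          sigma_prop=1.0, sigma_obs=0.5):
    """Particle filter whose proposal is centred on the current particles
    shifted by an estimated motion vector (a stand-in for adaptive block
    matching), so abrupt motion is followed with few particles."""
    particles = rng.normal(observations[0], sigma_prop, n_particles)
    estimates = []
    for z, v in zip(observations, motion_estimates):
        # Propose: previous particles moved by the estimated motion.
        particles = particles + v + rng.normal(0, sigma_prop, n_particles)
        # Weight by the likelihood of the new observation.
        w = np.exp(-0.5 * ((z - particles) / sigma_obs) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Multinomial resampling.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return estimates

# Target with an abrupt jump at t = 10; the motion estimate tracks it.
truth = np.concatenate([np.arange(10.0), np.arange(30.0, 40.0)])
obs = truth + rng.normal(0, 0.5, truth.size)
motion = np.diff(truth, prepend=truth[0])  # oracle motion estimates
est = track(obs, motion)
```

A transition-prior proposal (motion set to zero) would lose the target at the jump unless the proposal variance were inflated, which is exactly the sampling-efficiency trade-off the abstract describes.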
IEEE Signal Processing Letters | 2006
Dan Schonfeld; Nidhal Bouaynaya
We derive a new method for multidimensional dynamic programming using the inclusion-exclusion principle. We subsequently propose an extension of the Viterbi algorithm to semi-causal, multidimensional functions. This approach is based on an extension of the 1-D trellis structure of the Viterbi algorithm to a tree structure in higher dimensions. We apply the dynamic tree programming algorithm to active surface extraction in video sequences. Simulation results show the efficiency and robustness of the proposed approach.
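For reference, the 1-D trellis Viterbi recursion that the paper generalizes can be sketched as follows (standard algorithm; the two-state toy model is illustrative):

```python
import numpy as np

def viterbi(log_trans, log_emit, log_init):
    """1-D Viterbi on a trellis: log_trans[i, j] is the log-probability of
    moving from state i to state j, log_emit[t, j] the log-likelihood of
    observation t under state j. Returns the most likely state path."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]       # best score ending in each state
    back = np.zeros((T, S), dtype=int)   # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans       # S x S candidate scores
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emit[t]
    # Trace the best path backwards.
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Sticky two-state chain; observations favour state 0 then state 1.
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_init = np.log(np.array([0.5, 0.5]))
log_emit = np.log(np.array([[0.9, 0.1]] * 3 + [[0.1, 0.9]] * 3))
path = viterbi(log_trans, log_emit, log_init)  # -> [0, 0, 0, 1, 1, 1]
```

The paper's contribution is replacing this 1-D trellis with a tree structure so the recursion applies to semi-causal multidimensional functions; the sketch above is only the 1-D baseline.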
Conference on Image and Video Communications and Processing | 2005
Nidhal Bouaynaya; Dan Schonfeld
Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an accurate, elastic contour around a tracked object. In the first part of the paper, we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique and construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking, and the tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics; in particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object, using a 1D active contour model based on a dynamic programming scheme. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
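The dynamic-programming contour refinement can be sketched on a radial grid: each contour point chooses an offset along its normal, trading an image term against a quadratic smoothness term. This generic open-chain DP snake is assumed for illustration, not taken from the paper:

```python
import numpy as np

def refine_contour(radial_cost, smooth_w=1.0):
    """Refine a contour by dynamic programming: radial_cost[k, r] is the
    image cost of placing contour point k at radial offset r, and adjacent
    points pay smooth_w * (r_k - r_{k-1})**2 for bending. An open chain is
    used for simplicity (a closed contour needs one extra outer loop)."""
    K, R = radial_cost.shape
    offsets = np.arange(R)
    dp = radial_cost[0].copy()
    back = np.zeros((K, R), dtype=int)
    for k in range(1, K):
        pair = dp[:, None] + smooth_w * (offsets[:, None] - offsets[None, :]) ** 2
        back[k] = np.argmin(pair, axis=0)
        dp = pair[back[k], offsets] + radial_cost[k]
    r = [int(np.argmin(dp))]
    for k in range(K - 1, 0, -1):
        r.append(int(back[k, r[-1]]))
    return r[::-1]

# Toy costs: the true boundary lies at offset 3 everywhere, with one noisy
# point (k = 5) whose image term slightly prefers offset 0.
cost = np.ones((8, 6))
cost[:, 3] = 0.0
cost[5] = [0.0, 1.0, 1.0, 0.2, 1.0, 1.0]
r = refine_contour(cost, smooth_w=0.5)  # smoothness vetoes the outlier
```

The smoothness term keeps the noisy point at offset 3: jumping to offset 0 saves 0.2 in image cost but pays 9.0 in bending on both sides.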
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2016
Ghulam Rasool; Kamran Iqbal; Nidhal Bouaynaya; Gannon A. White
We present a novel formulation that employs task-specific muscle synergies and state-space representation of neural signals to tackle the challenging myoelectric control problem for lower arm prostheses. The proposed framework incorporates information about muscle configurations, e.g., muscles acting synergistically or in agonist/antagonist pairs, using the hypothesis of muscle synergies. The synergy activation coefficients are modeled as the latent system state and are estimated using a constrained Kalman filter. These task-dependent synergy activation coefficients are estimated in real-time from the electromyogram (EMG) data and are used to discriminate between various tasks. The task discrimination is aided by a post-processing algorithm that uses posterior probabilities. The proposed algorithm is robust as well as computationally efficient, yielding a decision with >90% discrimination accuracy in approximately 3 ms. The real-time performance and controllability of the algorithm were evaluated using the targeted achievement control (TAC) test. The proposed algorithm outperformed common machine learning algorithms for single- as well as multi-degree-of-freedom (DOF) tasks in both off-line discrimination accuracy and real-time controllability (p < 0.01).
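A minimal sketch of a Kalman filter for non-negative synergy activation coefficients, using a simple projection step as a stand-in for the paper's constrained estimator; the synergy matrix `W` and all model parameters are hypothetical:

```python
import numpy as np

def constrained_kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a Kalman filter whose state (synergy
    activation coefficients) is projected onto the non-negative orthant
    after the update -- a simplification of a constrained Kalman filter."""
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    # Constrain: activation coefficients cannot be negative.
    return np.maximum(x, 0.0), P

# Toy setup: 2 synergies mixed into 3 EMG channels by a synergy matrix W.
rng = np.random.default_rng(1)
W = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # hypothetical synergies
true_c = np.array([0.8, 0.1])                        # true activations
F, H = np.eye(2), W
Q, R = 1e-3 * np.eye(2), 1e-2 * np.eye(3)
x, P = np.zeros(2), np.eye(2)
for _ in range(50):
    z = W @ true_c + rng.normal(0, 0.1, 3)           # simulated EMG frame
    x, P = constrained_kf_step(x, P, z, F, H, Q, R)
```

In the paper's pipeline the estimated coefficients would then feed a posterior-probability classifier over tasks; the sketch stops at the state estimate.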
Visual Communications and Image Processing | 2006
Nidhal Bouaynaya; Dan Schonfeld
Originally, mathematical morphology was a theory of signal transformations which are invariant under Euclidean translations. An interest in the extension of mathematical morphology to spatially-variant (SV) operators has emerged due to the requirements imposed by numerous applications in adaptive signal (image) processing. This paper presents a general theory of spatially-variant mathematical morphology in the Euclidean space. We define the binary and gray-level spatially-variant basic morphological operators (i.e., erosion, dilation, opening and closing) and study their properties. We subsequently derive kernel representations for a large class of binary and gray-level SV operators in terms of the basic SV morphological operators. The theory of SV mathematical morphology is used to extend and analyze two important image processing applications: morphological image restoration and skeleton representation of binary images. For morphological image restoration, we obtain new realizations of adaptive median filters in terms of the basic SV morphological operators. For skeleton representation, we develop an algorithm to construct the optimal structuring elements, in the sense of minimizing the cardinality of the spatially-variant morphological skeleton representation. Experimental results show the power of the proposed theory of spatially-variant mathematical morphology in practical image processing applications.
Global Communications Conference | 2011
Yasir Rahmatallah; Nidhal Bouaynaya; Seshadri Mohan
This paper provides an analytical framework to study the performance of linear companding techniques proposed in the OFDM literature, thus settling the numerous controversial claims that are based solely on simulation results. Linear companding transforms are widely employed to reduce the peak-to-average-power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems. Two main linear companding classes have been considered in the literature: the linear symmetrical transform (LST) and the linear asymmetrical transform (LAST). In the literature, the bit error rate (BER) performance superiority of the basic LAST (with one discontinuity point) over the LST is claimed based on computer simulations. It has also been claimed that a LAST with two discontinuity points outperforms the basic LAST with one discontinuity point. These claims are, however, not substantiated with analytical results, and our analysis shows that they are, in general, not always true. We derive a sufficient condition, in terms of the companding parameters, under which the BER performance of a general LAST with M-1 discontinuity points is superior to that of the LST. The derived condition explains the contradictions between different reported results in the literature and validates some other reported simulation results. It also serves as a guideline in the process of choosing proper values for the companding parameters to obtain a specific trade-off between PAPR reduction capability and BER performance. In particular, the derived sufficient condition shows that the BER performance for the LAST depends on the slopes of the LAST rather than on the number of discontinuity points as has been indicated so far. Moreover, we derive conditions on the companding parameters that keep the average transmitted power unchanged after companding. Our theoretical derivations are supported by simulation results.
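PAPR measurement and a basic two-slope linear compander (one inflexion point) can be sketched as follows; the parameters `A`, `k1`, `k2` are illustrative, and the transform is kept continuous for simplicity:

```python
import numpy as np

rng = np.random.default_rng(2)

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def last_compand(x, A, k1, k2):
    """Two-slope linear compander with inflexion point A: amplitudes below
    A are expanded by k1 > 1, amplitudes above A are compressed by k2 < 1;
    the phase of each sample is preserved."""
    mag = np.abs(x)
    out = np.where(mag <= A, k1 * mag, k2 * mag + (k1 - k2) * A)
    return out * np.exp(1j * np.angle(x))

# Random QPSK OFDM symbol: N subcarriers -> time domain via an IFFT.
N = 256
sym = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
x = np.fft.ifft(sym) * np.sqrt(N)

papr_before = papr_db(x)
y = last_compand(x, A=np.median(np.abs(x)), k1=1.2, k2=0.5)
papr_after = papr_db(y)
```

Compressing the large amplitudes while expanding the small ones lowers the peak relative to the average power, which is the PAPR-reduction mechanism the paper analyzes; the paper's result is that the slopes (here `k1`, `k2`), not the number of inflexion points, govern the BER side of the trade-off.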
Bioinformatics | 2011
Nidhal Bouaynaya; Roman Shterenberg; Dan Schonfeld
MOTIVATION Analysis and intervention in the dynamics of gene regulatory networks is at the heart of emerging efforts in the development of modern treatment of numerous ailments including cancer. The ultimate goal is to develop methods to intervene in the function of living organisms in order to drive cells away from a malignant state into a benign form. A serious limitation of much of the previous work in cancer network analysis is the use of external control, which requires intervention at each time step, for an indefinite time interval. This is in sharp contrast to the proposed approach, which relies on the solution of an inverse perturbation problem to introduce a one-time intervention in the structure of regulatory networks. This isolated intervention transforms the steady-state distribution of the dynamic system to the desired steady-state distribution. RESULTS We formulate the optimal intervention problem in gene regulatory networks as a minimal perturbation of the network in order to force it to converge to a desired steady-state distribution of gene regulation. We cast optimal intervention in gene regulation as a convex optimization problem, thus providing a globally optimal solution which can be efficiently computed using standard toolboxes for convex optimization. The criterion adopted for optimality is chosen to minimize potential adverse effects as a consequence of the intervention strategy. We consider a perturbation that minimizes (i) the overall energy of change between the original and controlled networks and (ii) the time needed to reach the desired steady-state distribution of gene regulation. Furthermore, we show that there is an inherent trade-off between minimizing the energy of the perturbation and the convergence rate to the desired distribution. We apply the proposed control to the human melanoma gene regulatory network.
AVAILABILITY The MATLAB code for optimal intervention in gene regulatory networks can be found online: http://syen.ualr.edu/nxbouaynaya/Bioinformatics2010.html.
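The inverse perturbation idea can be illustrated on a toy Markov chain: find the minimum-Frobenius-norm perturbation D such that a desired distribution is stationary for P + D. This sketch handles only the linear constraints (the non-negativity of P + D, part of the paper's full convex program, is ignored), and the chain and target distribution are made up for illustration:

```python
import numpy as np

# Toy 3-state Markov chain standing in for a probabilistic gene network.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
pi_des = np.array([0.5, 0.3, 0.2])  # desired steady-state distribution

# Linear constraints on vec(D), row-major:
#   pi_des @ D = pi_des - pi_des @ P   (makes pi_des stationary for P + D)
#   D @ 1 = 0                          (keeps the rows of P + D stochastic)
n = P.shape[0]
A = np.zeros((2 * n, n * n))
b = np.zeros(2 * n)
for j in range(n):
    A[j, j::n] = pi_des                      # column-j stationarity equation
    b[j] = pi_des[j] - pi_des @ P[:, j]
for i in range(n):
    A[n + i, i * n:(i + 1) * n] = 1.0        # row-i sums to zero
# lstsq returns the minimum-norm solution: the Frobenius-minimal D.
D = np.linalg.lstsq(A, b, rcond=None)[0].reshape(n, n)
P_new = P + D
```

This one-shot structural change is the contrast the abstract draws with external control: after the perturbation, the network evolves on its own towards `pi_des` with no further intervention.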
International Conference on Signal Processing | 2010
Jerzy Zielinski; Nidhal Bouaynaya; Dan Schonfeld
Computer aided diagnosis (CAD) paradigms have gained currency for discriminating malignant from benign lesions in ultrasound breast images. But even the most sophisticated investigators often rely on one-dimensional representations of the image in terms of its scanlines. Such vector representations are convenient because of the mathematical tractability of one-dimensional time-series. However, they fail to take into account the spatial correlations between the pixels, which are crucial in tumor detection and classification in breast images. In this paper, we propose a CAD system for tumor detection and classification (cancerous vs. benign) in ultrasound breast images based on a two-dimensional Auto-Regressive-Moving-Average (ARMA) model of the breast image. First, we show, using the Wold decomposition theorem, that ultrasound breast images can be accurately modeled by two-dimensional ARMA random fields. As in the 1D case, the 2D ARMA parameter estimation problem is much more difficult than its 2D AR counterpart, due to the non-linearity in estimating the 2D moving average (MA) parameters. We propose to estimate the 2D ARMA parameters using a two-stage Yule-Walker Least-Squares algorithm. The estimated parameters are then used as the basis for statistical inference and biophysical interpretation of the breast image. We evaluate the performance of the 2D ARMA vector features in real ultrasound images using a k-means classifier. Our results suggest that the proposed CAD system based on a two-dimensional ARMA model leads to parameters that can accurately segment the ultrasound breast image into three regions: healthy tissue, benign tumor, and cancerous tumor. Moreover, the specificity and sensitivity of the proposed two-dimensional CAD system are superior to those of its one-dimensional homologue.
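The AR stage of such a fit can be sketched by ordinary least squares on a synthetic quarter-plane 2-D AR field; the MA stage of the full two-stage Yule-Walker Least-Squares algorithm is omitted, and the coefficients are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthesize a quarter-plane 2-D AR field:
# x[i, j] = a10*x[i-1, j] + a01*x[i, j-1] + a11*x[i-1, j-1] + noise
a10, a01, a11 = 0.5, 0.3, -0.2
H, W = 64, 64
x = np.zeros((H, W))
for i in range(1, H):
    for j in range(1, W):
        x[i, j] = (a10 * x[i - 1, j] + a01 * x[i, j - 1]
                   + a11 * x[i - 1, j - 1] + rng.normal(0, 0.1))

# Least-squares AR estimation: regress each pixel on its causal neighbours.
rows, targets = [], []
for i in range(1, H):
    for j in range(1, W):
        rows.append([x[i - 1, j], x[i, j - 1], x[i - 1, j - 1]])
        targets.append(x[i, j])
a_hat = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)[0]
```

In the CAD system, coefficient vectors like `a_hat`, estimated per image block, are the features handed to the k-means classifier.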
Journal of Bioinformatics and Computational Biology | 2014
Belhassen Bayar; Nidhal Bouaynaya; Roman Shterenberg
Non-negative matrix factorization (NMF) has proven to be a useful decomposition technique for multivariate data, where the non-negativity constraint is necessary to have a meaningful physical interpretation. NMF reduces the dimensionality of non-negative data by decomposing it into two smaller non-negative factors with physical interpretation for class discovery. The NMF algorithm, however, assumes a deterministic framework. In particular, the effect of the data noise on the stability of the factorization and the convergence of the algorithm are unknown. Collected data, on the other hand, is stochastic in nature due to measurement noise and sometimes inherent variability in the physical process. This paper presents new theoretical and applied developments to the problem of non-negative matrix factorization (NMF). First, we generalize the deterministic NMF algorithm to include a general class of update rules that converges towards an optimal non-negative factorization. Second, we extend the NMF framework to the probabilistic case (PNMF). We show that the maximum a posteriori (MAP) estimate of the non-negative factors is the solution to a weighted regularized non-negative matrix factorization problem. We subsequently derive update rules that converge towards an optimal solution. Third, we apply the PNMF to cluster and classify DNA microarray data. The proposed PNMF is shown to outperform the deterministic NMF and the sparse NMF algorithms in clustering stability and classification accuracy.
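The deterministic baseline that the paper generalizes, Lee-Seung multiplicative updates for Euclidean NMF, can be sketched as follows (the PNMF adds data-noise weighting and regularization on top of these updates):

```python
import numpy as np

rng = np.random.default_rng(4)

def nmf(V, k, n_iter=300, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~= W @ H under the Euclidean
    cost; each update is non-increasing in ||V - WH||_F and preserves the
    non-negativity of the factors."""
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly factorable toy data: a rank-2 non-negative matrix.
W0 = rng.random((20, 2))
H0 = rng.random((2, 30))
V = W0 @ H0
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

With exactly rank-2 data the relative reconstruction error drives towards zero; in the microarray application the columns of `W` play the role of metagenes and `H` assigns samples to clusters.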