Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Khaled A. Mayyas is active.

Publication


Featured research published by Khaled A. Mayyas.


IEEE Transactions on Signal Processing | 1997

A robust variable step-size LMS-type algorithm: analysis and simulations

Tyseer Aboulnasr; Khaled A. Mayyas

A number of time-varying step-size algorithms have been proposed to enhance the performance of the conventional LMS algorithm. Experimentation with these algorithms indicates that their performance is highly sensitive to the noise disturbance. This paper presents a robust variable step-size LMS-type algorithm providing fast convergence at early stages of adaptation while ensuring small final misadjustment. The performance of the algorithm is not affected by the presence of uncorrelated noise disturbances. An approximate analysis of convergence and steady-state performance for zero-mean stationary Gaussian inputs and for a nonstationary optimal weight vector is provided. Simulation results comparing the proposed algorithm to current variable step-size algorithms clearly indicate its superior performance in stationary environments. In nonstationary environments, our algorithm performs as well as other variable step-size algorithms in providing performance equivalent to that of the regular LMS algorithm.
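
As a rough illustration of the approach described above, the sketch below adapts the LMS step size from a time-averaged estimate of the correlation between successive errors, so that uncorrelated measurement noise contributes little to the step-size control. The parameter names and values (alpha, beta, gamma, the step-size bounds) are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def robust_vss_lms(x, d, num_taps, alpha=0.97, beta=0.99, gamma=5e-4,
                   mu_min=1e-4, mu_max=0.05):
    """Sketch of a variable step-size LMS filter whose step size is driven by
    p(n), a smoothed estimate of E[e(n) e(n-1)], making the step-size control
    insensitive to uncorrelated noise (illustrative parameters only)."""
    w = np.zeros(num_taps)      # adaptive filter coefficients
    mu = mu_max                 # start large for fast initial convergence
    p = 0.0                     # smoothed error-correlation estimate
    e_prev = 0.0
    errors = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]          # current input vector
        e = d[n] - w @ u                             # a priori error
        p = beta * p + (1.0 - beta) * e * e_prev     # correlation of successive errors
        mu = np.clip(alpha * mu + gamma * p**2, mu_min, mu_max)
        w += mu * e * u                              # LMS-type coefficient update
        e_prev = e
        errors[n] = e
    return w, errors
```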


IEEE Transactions on Signal Processing | 1999

Complexity reduction of the NLMS algorithm via selective coefficient update

Tyseer Aboulnasr; Khaled A. Mayyas

This article proposes an algorithm for partial update of the coefficients of the normalized least mean square (NLMS) finite impulse response (FIR) adaptive filter. It is shown that while the proposed algorithm reduces the complexity of the adaptive filter, it maintains the closest performance to the full update NLMS filter for a given number of updates. Analysis of the MSE convergence and steady-state performance for independent and identically distributed (i.i.d.) signals is provided for the extreme case of one update/iteration.
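
A minimal sketch of the partial-update idea, assuming the coefficients chosen at each iteration are those paired with the largest-magnitude input samples (a common selection heuristic); the selection rule, normalization, and parameter values here are assumptions for illustration rather than the paper's exact algorithm.

```python
import numpy as np

def scu_nlms(x, d, num_taps, num_updates, mu=0.5, eps=1e-6):
    """NLMS with selective coefficient update: per iteration, only the
    num_updates taps whose input samples have the largest magnitude are
    updated with a normalized LMS rule (illustrative sketch)."""
    w = np.zeros(num_taps)
    errors = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        # pick the num_updates positions with the largest |input| values
        idx = np.argpartition(np.abs(u), -num_updates)[-num_updates:]
        # normalized update restricted to the selected coefficients
        w[idx] += mu * e * u[idx] / (eps + u[idx] @ u[idx])
        errors[n] = e
    return w, errors
```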


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 1995

A robust variable step size LMS-type algorithm: analysis and simulations

Khaled A. Mayyas; Tyseer Aboulnasr

The paper presents a robust variable step size LMS-type algorithm with the attractive property of achieving a small final misadjustment while providing fast convergence at early stages of adaptation. The performance of the algorithm is not affected by the presence of noise. Approximate analysis of convergence and steady state performance for zero-mean stationary Gaussian inputs and a nonstationary optimal weight vector is provided. Simulation results clearly indicate its superior performance for stationary cases. For the nonstationary environment, the algorithm provides performance equivalent to that of the regular LMS algorithm.


IEEE Transactions on Signal Processing | 1997

Leaky LMS algorithm: MSE analysis for Gaussian data

Khaled A. Mayyas; Tyseer Aboulnasr

Despite the widespread usage of the leaky LMS algorithm, there has been no detailed study of its performance. This paper presents an analytical treatment of the mean-square error (MSE) performance for the leaky LMS adaptive algorithm for Gaussian input data. The common independence assumption regarding W(n) and X(n) is also used. Exact expressions that completely characterize the second moment of the coefficient vector and algorithm steady-state excess MSE are developed. Rigorous conditions for MSE convergence are also established. Analytical results are compared with simulation and are shown to agree well.
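
For reference, the coefficient update analyzed in this paper is the standard leaky LMS recursion, w(n+1) = (1 - mu*gamma) w(n) + mu e(n) x(n); the sketch below implements it with assumed parameter values.

```python
import numpy as np

def leaky_lms(x, d, num_taps, mu=0.01, gamma=1e-3):
    """Leaky LMS adaptive filter. The leakage factor (1 - mu*gamma) pulls the
    coefficients toward zero, which bounds them when the input is
    ill-conditioned, at the cost of a small bias."""
    w = np.zeros(num_taps)
    errors = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # current input vector
        e = d[n] - w @ u                      # a priori error
        w = (1.0 - mu * gamma) * w + mu * e * u
        errors[n] = e
    return w, errors
```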


IEEE Transactions on Signal Processing | 2005

Performance analysis of the deficient length LMS adaptive algorithm

Khaled A. Mayyas

In almost all analyses of the least mean-square (LMS) finite impulse response (FIR) adaptive algorithm, it is assumed that the length of the adaptive filter is equal to that of the unknown system impulse response. However, in many practical situations, a deficient length adaptive filter, whose length is less than that of the unknown system, is employed, and analysis results for the sufficient length LMS algorithm are not necessarily applicable to the deficient length case. Therefore, there is an essential need to accurately quantify the behavior of the LMS algorithm for realistic situations where the length of the adaptive filter is deficient. In this paper, we present a performance analysis for the deficient length LMS adaptive algorithm for correlated Gaussian input data and using the common independence assumption. Exact expressions that completely characterize the transient and steady-state mean-square performance of the algorithm are developed, which lead to new insights into the statistical behavior of the deficient length LMS algorithm. Simulation experiments illustrate the accuracy of the theoretical results in predicting the convergence behavior of the algorithm.
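
The deficient-length situation is easy to reproduce numerically: in the small simulation below (system, noise level, and step size are made-up illustrative values), an 8-tap LMS filter identifies a 16-tap system, and the steady-state MSE settles well above the measurement-noise floor because the unmodeled tail of the impulse response acts as an additional disturbance.

```python
import numpy as np

rng = np.random.default_rng(0)
sys_len, filt_len, mu, n_samples = 16, 8, 0.01, 20000
h = rng.standard_normal(sys_len)             # unknown 16-tap system
x = rng.standard_normal(n_samples)           # white Gaussian input
d = np.convolve(x, h)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

w = np.zeros(filt_len)                       # deficient-length adaptive filter (8 taps)
sq_err = []
for n in range(sys_len, n_samples):
    u = x[n - filt_len + 1:n + 1][::-1]      # input vector seen by the short filter
    e = d[n] - w @ u
    w += mu * e * u                          # standard LMS update
    sq_err.append(e**2)

# The MSE floor is dominated by the energy of the unmodeled taps h[8:],
# not by the 1e-4 measurement-noise variance.
print("steady-state MSE estimate:", np.mean(sq_err[-2000:]))
```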


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 1997

Selective coefficient update of gradient-based adaptive algorithms

Tyseer Aboulnasr; Khaled A. Mayyas

One common approach to reducing the computational overhead of the normalized LMS (NLMS) algorithm is to update a subset of the adaptive filter coefficients. It is known that the mean square error (MSE) is not equally sensitive to the variations of the coefficients. Accordingly, the choice of the coefficients to be updated becomes crucial. On this basis, we propose an algorithm that belongs to the same family but selects at each iteration a specific subset of the coefficients that will result in the largest reduction in the performance error. The proposed algorithm reduces the complexity of the NLMS algorithm, as do the current algorithms from the same family, while maintaining a performance close to the full update NLMS algorithm specifically for correlated inputs.


Journal of the Franklin Institute - Engineering and Applied Mathematics | 2011

An LMS adaptive algorithm with a new step-size control equation

Khaled A. Mayyas; Fadi Momani

In this paper, we introduce a new variable step-size LMS (VSSLMS) adaptive algorithm. The algorithm's step-size equations estimate a derived optimal step size and are controlled by a single parameter. Mean-square performance analysis is provided for a zero-mean stationary Gaussian input signal, and a simple expression that predicts the algorithm's steady-state misadjustment is derived for small step-size fluctuations. The algorithm is compared with other well-known VSSLMS algorithms through simulation experiments, which demonstrate its performance advantages over these algorithms.


Digital Signal Processing | 2010

A variable step-size affine projection algorithm

Khaled A. Mayyas

In this paper, we propose a new time-varying step-size for the affine projection (AP) algorithm based on the minimization of the mean-square error (MSE) at each time instant. The step-size depends only on accessible quantities and, therefore, does not need approximation. In addition, we show how the new step-size control equations can be incorporated into the fast AP (FAP) algorithm, leading to a reduced complexity implementation of the proposed algorithm. The algorithm's improved performance characteristics are verified by simulation experiments.
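
For context, a plain affine projection update that such a time-varying step size plugs into is sketched below; variable_step is a placeholder for a data-driven step-size rule and is not the control equation derived in the paper, and all parameter values are assumptions.

```python
import numpy as np

def ap_filter(x, d, num_taps, proj_order=4, delta=1e-3,
              variable_step=lambda n, e: 0.5):
    """Affine projection (AP) adaptive filter with a pluggable step size.
    variable_step(n, e) stands in for an MSE-driven step-size rule."""
    w = np.zeros(num_taps)
    errors = np.zeros(len(x))
    for n in range(num_taps + proj_order, len(x)):
        # data matrix: the proj_order most recent input vectors as columns
        U = np.column_stack([x[n - k - num_taps + 1:n - k + 1][::-1]
                             for k in range(proj_order)])
        e = d[n - proj_order + 1:n + 1][::-1] - U.T @ w   # error vector
        mu = variable_step(n, e)                          # time-varying step size
        w += mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(proj_order), e)
        errors[n] = e[0]
    return w, errors
```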


Digital Signal Processing | 2013

A variable step-size selective partial update LMS algorithm

Khaled A. Mayyas

Selective partial update of the adaptive filter coefficients has been a popular method for reducing the computational complexity of least mean-square (LMS)-type adaptive algorithms. These algorithms use a fixed step-size that forces a compromise between fast convergence speed and small steady-state misadjustment. This paper proposes a variable step-size (VSS) selective partial update LMS algorithm, where the VSS is an approximation of a derived optimal step size. The VSS equations are controlled by only one parameter and do not require any a priori information about the statistics of the system environment. Mean-square performance analysis is provided for independent and identically distributed (i.i.d.) input signals, and an expression for the algorithm's steady-state excess mean-square error (MSE) is presented. Simulation experiments comparing the proposed algorithm with existing full-update VSS LMS algorithms indicate that it performs as well as these algorithms while requiring less computational complexity.


IEEE Transactions on Circuits and Systems II: Express Briefs | 2004

Reduced-complexity transform-domain adaptive algorithm with selective coefficient update

Khaled A. Mayyas; Tyseer Aboulnasr

This paper proposes a new low-complexity transform-domain (TD) adaptive algorithm for acoustic echo cancellation. The algorithm is based on decomposing the long adaptive filter into smaller subfilters and employing the selective coefficient update (SCU) approach in each subfilter to reduce computational complexity. The resulting algorithm combines the fast-converging characteristic of the TD decomposition technique with the low complexity of the SCU at minimal performance loss. The improvement in convergence speed comes at the expense of a corresponding increase in misadjustment. To overcome this problem, a hybrid of the proposed algorithm and the standard TD LMS algorithm (TDLMS) is presented. The hybrid algorithm retains the fast convergence of the original algorithm while allowing for a low final MSE. Simulations show that the hybrid algorithm offers superior performance compared to the standard TDLMS algorithm with less computational overhead.
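
As background, the baseline transform-domain LMS that this algorithm builds on transforms each input vector (here with a DCT) and normalizes every coefficient update by a running power estimate of its transform bin; the paper additionally splits the long filter into subfilters and updates only a selected subset of coefficients in each, which is not shown in this sketch. Parameter values are assumed for illustration.

```python
import numpy as np
from scipy.fft import dct

def tdlms(x, d, num_taps, mu=0.5, beta=0.95, eps=1e-6):
    """Baseline transform-domain LMS (TDLMS): DCT the input vector and scale
    each coefficient's step size by the inverse of that bin's estimated power,
    which whitens the update and speeds convergence for correlated inputs."""
    w = np.zeros(num_taps)              # coefficients in the transform domain
    power = np.full(num_taps, eps)      # per-bin input power estimates
    errors = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        z = dct(u, norm='ortho')        # transform-domain input vector
        e = d[n] - w @ z                # a priori error
        power = beta * power + (1.0 - beta) * z**2
        w += mu * e * z / (power + eps) # power-normalized update
        errors[n] = e
    return w, errors
```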

Collaboration


Khaled A. Mayyas's research collaborations.

Top Co-Authors

Taisir Eldos, Jordan University of Science and Technology
A. Khasawneh, Jordan University of Science and Technology
Fadi Momani, Jordan University of Science and Technology
Hani Abu-Seba, Jordan University of Science and Technology
Mohammed A. Khasawneh, Jordan University of Science and Technology
Monther I. Haddad, Jordan University of Science and Technology