
Publication


Featured research published by Kevin T. Wagner.


International Conference on Acoustics, Speech, and Signal Processing | 2009

Gain allocation in proportionate-type NLMS algorithms for fast decay of output error at all times

Kevin T. Wagner; Milos Doroslovacki

In this paper, we propose three new proportionate-type NLMS algorithms: the water filling algorithm, the feasible water filling algorithm, and the adaptive μ-law proportionate NLMS (MPNLMS) algorithm. The water filling algorithm attempts to choose the optimal gains at each time step. The optimal gains are found by minimizing the mean square error (MSE) at each time with respect to the gains, given the previous mean square weight deviations. While this algorithm offers superior convergence times, it is not feasible. The second algorithm is a feasible version of the water filling algorithm. The adaptive MPNLMS (AMPNLMS) algorithm is a modification of the MPNLMS algorithm. In the MPNLMS algorithm, the parameter μ of the μ-law compression is constant. In the AMPNLMS algorithm the parameter μ is allowed to vary with time. This modification allows the algorithm more flexibility when attempting to minimize the MSE. Compared with several feasible algorithms, the AMPNLMS algorithm has the fastest MSE decay for almost all times.
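The μ-law gain rule at the heart of the MPNLMS family can be pictured as follows (an illustrative NumPy sketch, not the paper's exact algorithm; the constants `mu`, `beta`, `rho`, and `delta` are placeholder values, and the AMPNLMS variant would additionally adapt `mu` over time):

```python
import numpy as np

def mpnlms_update(w, x, d, mu=1000.0, beta=0.5, rho=0.01, delta=1e-4):
    """One proportionate-type NLMS step with mu-law gain allocation.

    w : current filter estimate, x : input regressor vector,
    d : desired output sample.  Returns the updated estimate and
    the a priori output error.
    """
    e = d - x @ w                                    # a priori output error
    # mu-law compression of coefficient magnitudes drives the gains:
    # large taps receive proportionally larger step-sizes, which speeds
    # convergence on sparse impulse responses
    f = np.log1p(mu * np.abs(w)) / np.log1p(mu)
    gamma = np.maximum(rho * max(f.max(), 0.01), f)  # floor tiny gains
    g = gamma / gamma.mean()                         # per-tap gains, mean one
    w = w + beta * g * x * e / (x @ (g * x) + delta)
    return w, e
```

With `g` identically one this reduces to standard NLMS; the proportionate gains only change how the overall adaptation energy is distributed across taps.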


IEEE Transactions on Signal Processing | 2011

Proportionate-Type Normalized Least Mean Square Algorithms With Gain Allocation Motivated by Mean-Square-Error Minimization for White Input

Kevin T. Wagner; Milos Doroslovacki

In the past, ad hoc methods have been used to choose gains in proportionate-type normalized least mean-square algorithms without strong theoretical underpinnings. In this correspondence, a theoretical framework and motivation for adaptively choosing gains are presented, such that the mean-square error is minimized at any given time. As a result of this approach, a new optimal proportionate-type normalized least mean-square algorithm is introduced. A computationally simplified version of the theoretical optimal algorithm is derived as well. Both of these algorithms require knowledge of the mean-square weight deviations. Feasible implementations, which estimate the mean-square weight deviations, are presented. The performance of these new feasible algorithms is compared to that of standard adaptive algorithms operating with sparse, non-sparse, and time-varying impulse responses when the input signal is white. Specifically, we consider the transient and steady-state mean-square errors as well as the overall computational complexity of each algorithm.


International Conference on Acoustics, Speech, and Signal Processing | 2008

Towards analytical convergence analysis of proportionate-type NLMS algorithms

Kevin T. Wagner; Miloš Doroslovački

To date, no theoretical results have been developed to predict the performance of the proportionate normalized least mean square (PNLMS) algorithm or any of its cousin algorithms, such as the μ-law PNLMS (MPNLMS) and the ε-law PNLMS (EPNLMS). In this paper, we develop an analytic approach to predicting the performance of the simplified PNLMS algorithm, which is closely related to the PNLMS algorithm. In particular, we demonstrate the ability to predict the mean square output error of the simplified PNLMS algorithm using our theory.


International Conference on Acoustics, Speech, and Signal Processing | 2012

Complex proportionate-type normalized least mean square algorithms

Kevin T. Wagner; Milos Doroslovacki

A complex proportionate-type normalized least mean square algorithm is derived by minimizing the second norm of the weighted difference between the current estimate of the impulse response and the estimate at the next time step, under the constraint that the adaptive filter's a posteriori output is equal to the measured output. The weighting function is assumed positive but otherwise arbitrary, and it is directly related to the update gains. No assumptions regarding the input signal are made during the derivation. Different weights (i.e., gains) are used for the real and imaginary parts of the estimated impulse response. After additional assumptions, special cases of the algorithm are obtained: the algorithm with one gain per impulse response coefficient and the algorithm with lower computational complexity. The learning curves of the algorithms are compared for several standard gain assignment laws for white and colored input. It is demonstrated that, in general, the algorithms with separate gains for real and imaginary parts have faster convergence.
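One way to picture the separate-gains idea (a minimal sketch, assuming a μ-law style gain rule applied independently to the real and imaginary parts of the coefficient estimates; the normalization below is a simplification for illustration, not the paper's derivation):

```python
import numpy as np

def mu_law_gains(a, mu=1000.0, rho=0.01):
    """Per-tap gains (mean one) from mu-law compression of magnitudes."""
    f = np.log1p(mu * np.abs(a)) / np.log1p(mu)
    gamma = np.maximum(rho * max(f.max(), 0.01), f)
    return gamma / gamma.mean()

def complex_ptnlms_update(w, x, d, beta=0.5, delta=1e-4):
    """One complex PtNLMS step with separate gains for the real and
    imaginary parts of each coefficient estimate."""
    e = d - x @ w                     # a priori error (complex)
    u = np.conj(x) * e                # per-tap correction direction
    g_re = mu_law_gains(w.real)       # gains for the real parts
    g_im = mu_law_gains(w.imag)       # gains for the imaginary parts
    g_avg = 0.5 * (g_re + g_im)
    norm = np.real(np.conj(x) @ (g_avg * x)) + delta
    w = w + beta * (g_re * u.real + 1j * g_im * u.imag) / norm
    return w, e
```

When `g_re` and `g_im` coincide, the step collapses to a complex PtNLMS update with one gain per coefficient; letting them differ is what allows a coefficient whose real and imaginary parts have very different magnitudes to adapt each part at its own rate.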


International Conference on Acoustics, Speech, and Signal Processing | 2011

Proportionate-type normalized least mean square algorithm with gain allocation motivated by minimization of mean-square-weight deviation for colored input

Kevin T. Wagner; Milos Doroslovacki

In previous work, a water-filling algorithm was proposed which sought to minimize the mean square error (MSE) at any given time by optimally choosing the gains (i.e., step-sizes) at each time instant. That work relied on the assumption that the input signal was white. In this paper, an algorithm is derived which operates when the input signal is colored. The proposed algorithm minimizes the mean square weight deviation, which is important in many applications such as system identification. Additionally, it is shown that by minimizing the mean square weight deviation, an upper bound on the MSE is also minimized. The proposed algorithm offers improved misalignment and learning-curve convergence rates relative to other standard algorithms.
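The bound invoked in the abstract above follows from the standard error decomposition (a sketch under the usual independence assumption between the weight deviation and the current input; $R$ is the input autocorrelation matrix and $\sigma_v^2$ the measurement-noise variance):

```latex
e(n) = \mathbf{x}^T(n)\,\tilde{\mathbf{w}}(n) + v(n),
\qquad
\tilde{\mathbf{w}}(n) = \mathbf{w}_{\mathrm{opt}} - \hat{\mathbf{w}}(n),
\]
\[
\mathrm{MSE}(n)
= \mathrm{E}\!\left[\tilde{\mathbf{w}}^T(n)\,R\,\tilde{\mathbf{w}}(n)\right] + \sigma_v^2
\;\le\; \lambda_{\max}(R)\,\mathrm{E}\!\left[\|\tilde{\mathbf{w}}(n)\|^2\right] + \sigma_v^2 ,
```

so driving down the mean square weight deviation $\mathrm{E}[\|\tilde{\mathbf{w}}(n)\|^2]$ drives down an upper bound on the MSE; for white input, where $R = \sigma_x^2 I$, the two quantities coincide up to the factor $\sigma_x^2$.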


International Conference on Acoustics, Speech, and Signal Processing | 2014

Combination coefficients for fastest convergence of distributed LMS estimation

Kevin T. Wagner; Milos Doroslovacki

Diffusion strategies for learning across networks which minimize the transient regime mean-square deviation across all nodes are presented. The problem of choosing combination coefficients which minimize the mean-square deviation at all given time instances results in a quadratic program with linear constraints. The implementation of the optimal procedure is based on the estimation of weight deviation vectors for which an algorithm is proposed. Additionally, the optimization that uses relaxed constraints is considered. The proposed methods were validated through simulations for different estimation distribution strategies and input signals. The results show a potential for significant improvement of the convergence speed.
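The adapt-then-combine structure whose combiners are being optimized can be sketched as follows (an illustrative sketch only: the paper chooses the combination matrix anew at each instant by solving the constrained quadratic program, whereas here `A` is a fixed, hypothetical neighbor-averaging matrix):

```python
import numpy as np

def atc_diffusion_lms_step(W, X, D, A, mu=0.05):
    """One adapt-then-combine (ATC) diffusion LMS step.

    W : (N, M) current estimates, one row per node.
    X : (N, M) per-node input regressors.
    D : (N,) per-node desired samples.
    A : (N, N) combination matrix, column-stochastic and supported
        on the network topology.
    """
    # adapt: each node runs a local LMS update on its own data
    E = D - np.einsum('ij,ij->i', X, W)     # per-node a priori errors
    Psi = W + mu * E[:, None] * X           # intermediate estimates
    # combine: node k forms sum_l A[l, k] * Psi[l] over its neighbors
    return A.T @ Psi
```

The linear constraints of the quadratic program correspond to `A` having nonnegative entries, columns summing to one, and support restricted to the network's edges; the objective picks, within that feasible set, the combiners minimizing the network mean-square deviation at each instant.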


Asilomar Conference on Signals, Systems and Computers | 2015

Improving convergence of distributed LMS estimation by enabling propagation of good estimates through bad nodes

Kevin T. Wagner; Milos Doroslovacki

A noisy node that is the only passage between two parts of a network can obstruct propagation of a good estimate through the network. Assuming an adapt-then-combine diffusion-based least mean square algorithm that uses combiners minimizing the mean square weight deviations, we found a sufficient condition for mean square weight deviation convergence that also guarantees propagation of good estimates through the whole connected part of the network. A practical algorithmic implementation of this condition is developed and compared in performance with several known algorithms on a nontrivial network. The proposed algorithm demonstrates improved performance.


Conference on Information Sciences and Systems | 2011

Reduced computational complexity suboptimal gain allocation for proportionate-type NLMS algorithms with colored inputs

Kevin T. Wagner; Milos Doroslovacki

In this work, two suboptimal algorithms are introduced for gain allocation in the colored-input case. Each algorithm reduces computational complexity by removing the sorting operation needed in the original algorithm. For stationary colored input signals, the suboptimal algorithms offer improved MSE convergence performance relative to standard algorithms such as the NLMS, PNLMS, and MPNLMS. For non-stationary input signals, the suboptimal algorithms do not provide improved MSE convergence performance relative to standard algorithms. The suboptimal algorithms' parameters could potentially be tuned to provide better convergence performance.


Asilomar Conference on Signals, Systems and Computers | 2006

Convergence of Proportionate-type LMS Adaptive Filters and Choice of Gain Matrix

Kevin T. Wagner; Milos Doroslovacki; Hongyang Deng

Recently, it has been shown that proportionate-type LMS adaptive filters converge for sufficiently small adaptation step-size parameters and white input. In addition, a theoretical connection between proportionate-type steepest descent algorithms and proportionate-type stochastic algorithms for a constant gain matrix has been revealed. Motivated by this theoretical connection, we seek a connection between these types of algorithms for a time-varying gain matrix. To that end, we examine the feasibility of predicting the performance of a stochastic proportionate algorithm with a time-varying gain matrix by analyzing the performance of its associated deterministic steepest descent algorithm. In doing so, we have found that this approach has merit. Using this insight, various steepest descent algorithms are studied and used to predict and explain the behavior of their counterpart stochastic algorithms. In particular, it is shown that the μ-PNLMS algorithm possesses robust behavior. In addition, the ε-PNLMS algorithm is proposed and its performance is evaluated.
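The constant-gain-matrix connection mentioned above can be illustrated numerically (a sketch under stated assumptions: white zero-mean input, a fixed diagonal gain matrix computed once from the true response with a PNLMS-style rule, and noiseless measurements; all function names and constants here are illustrative, not the paper's):

```python
import numpy as np

def pnlms_gains(a, rho=0.01):
    """PNLMS-style per-tap gains (mean one) from coefficient magnitudes."""
    m = np.abs(a)
    gamma = np.maximum(rho * max(m.max(), 0.01), m)
    return gamma / gamma.mean()

def steepest_descent(h, G, beta, sigma2, n_iter):
    """Deterministic proportionate steepest descent for white input,
    i.e. autocorrelation R = sigma2 * I."""
    w = np.zeros_like(h)
    for _ in range(n_iter):
        w = w + beta * sigma2 * G * (h - w)
    return w

def stochastic_plms(h, G, beta, n_iter, rng):
    """Stochastic proportionate LMS using the same constant gain matrix."""
    w = np.zeros_like(h)
    for _ in range(n_iter):
        x = rng.standard_normal(h.size)
        e = x @ (h - w)               # noiseless output error
        w = w + beta * G * x * e
    return w
```

For a constant `G` and independent white regressors, taking expectations of the stochastic recursion reproduces the steepest descent recursion exactly, so an ensemble average of stochastic runs tracks the deterministic trajectory; the time-varying-gain case studied in the paper is where this correspondence becomes nontrivial.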


Conference on Information Sciences and Systems | 2012

Comparisons of one and two adaptation gain complex PtNLMS algorithms

Kevin T. Wagner; Milos Doroslovacki

Learning curves of the complex proportionate-type normalized least mean square (CPtNLMS), simplified CPtNLMS, and one-gain CPtNLMS algorithms are compared for different complex input signals and complex impulse responses. Cases where the algorithms show different behaviors are presented. In addition, the water-filling algorithm for optimal adaptation-gain allocation is extended from real-valued to complex-valued signals. In fact, two complex water-filling algorithms are derived: one with a single gain for each coefficient's real and imaginary parts, and one with separate gains for the real and imaginary parts.

Collaboration


Dive into Kevin T. Wagner's collaborations.

Top Co-Authors

Milos Doroslovacki (George Washington University)
Miloš Doroslovački (United States Naval Research Laboratory)
Richard C. Chen (United States Naval Research Laboratory)
Karl Gerlach (United States Naval Research Laboratory)