
Publication

Featured research published by Rajib Lochan Das.


IEEE Transactions on Circuits and Systems II: Express Briefs | 2015

On Convergence of Proportionate-Type Normalized Least Mean Square Algorithms

Rajib Lochan Das; Mrityunjoy Chakraborty

In this paper, a new convergence analysis is presented for a well-known sparse adaptive filter family, namely, the proportionate-type normalized least mean square (PtNLMS) algorithms, where, unlike all the existing approaches, no assumption of whiteness is made on the input. The analysis relies on a “transform” domain based model of the PtNLMS algorithms and brings out certain new convergence features not reported earlier. In particular, it establishes the universality of the steady-state excess mean square error formula derived earlier under white input assumption. In addition, it brings out a new relation between the mean square deviation of each tap weight and the corresponding gain factor used in the PtNLMS algorithm.
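As a concrete illustration of the algorithm family being analyzed, a PtNLMS weight update can be sketched as below. This is a minimal, hypothetical PNLMS-style instance: the gain rule, the floor constant rho and the step size are illustrative choices, not the exact setup of the paper.

```python
import numpy as np

def ptnlms_identify(x, d, L, mu=0.5, delta=1e-2, rho=0.01):
    # Sketch of a PNLMS-style proportionate-type NLMS update.
    # Gains are proportional to |w_i|, floored so inactive taps keep adapting.
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]   # regressor, newest sample first
        e = d[n] - w @ u               # a priori estimation error
        a = np.abs(w)
        g = np.maximum(a, rho * max(a.max(), 1e-3))
        g /= g.sum()                   # proportionate gains, G = diag(g)
        w = w + mu * g * u * e / (u @ (g * u) + delta)
    return w

rng = np.random.default_rng(0)
h = np.zeros(16)
h[2], h[9] = 1.0, -0.5                 # sparse "true" system
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]         # noiseless desired signal
w_hat = ptnlms_identify(x, d, 16)
```

The diagonal gain vector g plays the role of the gain matrix in the analysis: active taps receive large gains and converge quickly, which is what ties each tap's mean square deviation to its gain factor.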


IEEE Signal Processing Letters | 2013

Improving the Bound on the RIP Constant in Generalized Orthogonal Matching Pursuit

Siddhartha Satpathi; Rajib Lochan Das; Mrityunjoy Chakraborty

The generalized Orthogonal Matching Pursuit (gOMP) is a recently proposed compressive sensing greedy recovery algorithm which generalizes the OMP algorithm by selecting N (≥ 1) atoms in each iteration. In this letter, we demonstrate that the gOMP can successfully reconstruct a K-sparse signal from a compressed measurement y = Φx in at most K iterations if the sensing matrix Φ satisfies the Restricted Isometry Property (RIP) of order NK, with the RIP constant δ_NK satisfying δ_NK < √N / (√K + 2√N). The proposed bound is an improvement over the existing bound on δ_NK. We also show that by increasing the RIP order by just one (i.e., NK+1 from NK), it is possible to refine the bound further to δ_(NK+1) < √N / (√K + √N), which is consistent (for N = 1) with the near-optimal bound on δ_(K+1) in OMP.
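The gOMP selection rule (N largest-correlation atoms per iteration, then a least-squares re-estimate over the accumulated support) can be sketched as follows. The matrix sizes, seed and tolerances are illustrative, not from the letter.

```python
import numpy as np

def gomp(Phi, y, K, N=2):
    # generalized OMP: select N largest-correlation atoms per iteration,
    # then least-squares re-estimate over the accumulated support.
    L = Phi.shape[1]
    S = []
    r = y.copy()
    for _ in range(K):                       # theory: at most K iterations
        c = np.abs(Phi.T @ r)
        c[S] = -1.0                          # never re-select an atom
        S.extend(np.argsort(c)[-N:].tolist())
        xS, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        r = y - Phi[:, S] @ xS
        if np.linalg.norm(r) < 1e-10:
            break
    x = np.zeros(L)
    x[S] = xS
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((64, 128)) / np.sqrt(64)   # near-unit-norm columns
x0 = np.zeros(128)
x0[[5, 40, 77]] = [2.0, -2.0, 2.0]                   # K = 3 sparse signal
y = Phi @ x0
x_hat = gomp(Phi, y, K=3, N=2)
```

With N = 1 this reduces to plain OMP; the letter's point is how the admissible δ_NK grows with N.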


International Symposium on Circuits and Systems (ISCAS) | 2012

Sparse adaptive filters - An overview and some new results

Rajib Lochan Das; Mrityunjoy Chakraborty

In this paper, we provide an overview of the major developments in the area of sparse adaptive filters, starting from the celebrated work on the PNLMS algorithm and its several variants to more recent approaches that use the compressed sensing framework, more specifically, LASSO and basis pursuit or matching pursuit, to develop sparse adaptive algorithms with improved mean square error and tracking properties. Subsequently, we also present a new approach to identifying sparse systems with time-varying sparseness, for which a novel cooperative learning scheme involving PNLMS- and NLMS-based adaptive filters is developed.


International Symposium on Circuits and Systems (ISCAS) | 2014

A variable step-size zero attracting proportionate normalized least mean square algorithm

Rajib Lochan Das; Mrityunjoy Chakraborty

The proportionate normalized least mean square (PNLMS) algorithm and its variants are by far the most popular adaptive filters used to identify sparse systems. The convergence of the PNLMS algorithm, though very fast initially, slows down at a later stage, even becoming worse than that of sparsity-agnostic adaptive filters like the NLMS. In this paper, we address this problem by introducing a carefully constructed l1-norm (of the coefficients) penalty in the PNLMS cost function which favors sparsity. This results in certain "zero attractor" terms in the PNLMS weight update equation which help in the shrinkage of the coefficients, especially the inactive taps, thereby arresting the slowdown of convergence and also producing a lower steady-state excess mean square error (EMSE). We also demonstrate, both analytically and intuitively, that the EMSE cannot, however, be reduced significantly by the zero attractors alone, due to a fundamental shortcoming of the PNLMS algorithm, and propose methods to counter it by deploying a variable step size and a variable proportionality constant for the zero attractors. Simulation results confirm the excellent performance of the proposed algorithm vis-a-vis existing methods.
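Relative to a plain PNLMS update, the zero attractor amounts to one extra term in the weight recursion. A minimal sketch, assuming a fixed attractor strength rho_za (the paper instead makes the step size and the proportionality constant variable):

```python
import numpy as np

def za_pnlms(x, d, L, mu=0.5, delta=1e-2, rho=0.01, rho_za=5e-5):
    # PNLMS-style update plus an l1-penalty "zero attractor" term,
    # -rho_za * sign(w), which shrinks the (mostly inactive) taps.
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]   # regressor, newest sample first
        e = d[n] - w @ u
        a = np.abs(w)
        g = np.maximum(a, rho * max(a.max(), 1e-3))
        g /= g.sum()                   # proportionate gains
        w = w + mu * g * u * e / (u @ (g * u) + delta) - rho_za * np.sign(w)
    return w

rng = np.random.default_rng(2)
h = np.zeros(16)
h[4], h[12] = 0.8, -0.6                # sparse "true" system
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]
w_hat = za_pnlms(x, d, 16)
```

The sign(w) term leaves taps at exactly zero untouched and pulls small nonzero taps toward zero, which is where the shrinkage of the inactive taps comes from.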


Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA) | 2016

Performance analysis of proportionate-type LMS algorithms

Vinay Chakravarthi Gogineni; Subrahmanyam Mula; Rajib Lochan Das; Mrityunjoy Chakraborty

For real-time sparse system identification applications, Proportionate-type Least Mean Square (Pt-LMS) algorithms are often preferred to their normalized counterparts (Pt-NLMS) due to the lower computational complexity of the former. In this paper, we present a convergence analysis of Pt-LMS algorithms. Without any assumption on the input statistics, both first- and second-order convergence analyses are carried out and new convergence bounds are obtained. In particular, the analysis establishes the universality of the steady-state mean square deviation. Detailed simulation results are presented to validate the analytical results.


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) | 2013

Sparse adaptive filtering by iterative hard thresholding

Rajib Lochan Das; Mrityunjoy Chakraborty

In this paper, we present a new algorithm for sparse adaptive filtering, drawing on the ideas of a greedy compressed sensing recovery technique called iterative hard thresholding (IHT) and the concepts of affine projection. While the use of affine projections makes the algorithm robust against colored input, the use of IHT provides a remarkable improvement in convergence speed over existing sparse adaptive algorithms. Further, the gains in performance are achieved with very little increase in computational complexity.
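The role of hard thresholding can be sketched with a simple gradient-plus-thresholding loop. Note this is a simplified stand-in: the paper combines IHT with affine projections, whereas the sketch below uses a plain single-regressor LMS step before applying the thresholding operator H_K.

```python
import numpy as np

def hard_threshold(w, K):
    # H_K: keep the K largest-magnitude entries, zero the rest
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-K:]
    out[idx] = w[idx]
    return out

def iht_lms(x, d, L, K, mu=0.05):
    # LMS-style gradient step followed by hard thresholding, in the
    # spirit of IHT; not the paper's affine-projection formulation.
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]
        e = d[n] - w @ u
        w = hard_threshold(w + mu * u * e, K)
    return w

rng = np.random.default_rng(3)
h = np.zeros(16)
h[3], h[11] = 1.0, -0.5                # 2-sparse "true" system
x = rng.standard_normal(6000)
d = np.convolve(x, h)[:len(x)]
w_hat = iht_lms(x, d, 16, K=2)
```

Each iterate is K-sparse by construction, which is what keeps the estimate on the sparse model set throughout adaptation.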


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) | 2014

Proportionate-type hard thresholding adaptive filter for sparse system identification

Vinay Chakravarthi Gogineni; Rajib Lochan Das; Mrityunjoy Chakraborty

The recently proposed Hard Thresholding based Adaptive Filtering (HTAF) algorithm provides an online counterpart of a compressed sensing based greedy sparse recovery algorithm, iterative hard thresholding (IHT), by constructing a sliding-window based cost function. This leads to an adaptive algorithm with a data-reuse gradient term (i.e., with multiple regressors) followed by a fixed hard thresholding operator. The HTAF algorithm achieves both robustness against colored input (due to the data reuse in the gradient update) and a smaller steady-state error (due to the hard thresholding operator) while identifying a sparse system. In this paper, we propose a new sparse adaptive technique called the Proportionate-type Hard Thresholding Adaptive Filter (PtHTAF), using a proportionate-type gradient update followed by a variable hard thresholding operator. The proposed PtHTAF algorithm enjoys a faster initial convergence rate (due to the proportionate-type gradient update) while maintaining a low steady-state excess mean square error like the HTAF. Simulation results establish the superiority of the proposed algorithm over existing sparse adaptive algorithms.


National Conference on Communications (NCC) | 2013

Improving the performance of the LMS algorithm via cooperative learning

Rajib Lochan Das; Bijit Kumar Das; Mrityunjoy Chakraborty

Combining two adaptive filters working in parallel to achieve better performance, both in terms of convergence speed and excess mean square error (EMSE), has been considered by several researchers in the recent past. Prominent among these approaches are the convex combination (where the combination weight factors lie in the range [0, 1] while summing to one), the affine combination (where the combination weight factors are free from any range constraint while still summing to one) and the unconstrained model combination (where the outputs of the constituent filters are combined using another adaptive algorithm). In this paper, we propose a novel way of using two adaptive filters to achieve better performance, based on a cooperative learning approach. For this, we employ one LMS-based adaptive filter that uses a larger step size and thus has a faster rate of convergence at the expense of a higher EMSE. The other filter uses a modified version of the LMS algorithm with a much smaller step size, but with one extra term in the weight update relation that helps it learn the weight information of the faster filter. The learning takes place during the transient phase, while, in the steady state, the two filters become almost independent of each other. The presence of the learning component in the weight update recursion enables the filter to converge much faster, while the smaller step size ensures a much lower steady-state EMSE. The claims are supported by theoretical as well as detailed simulation studies.
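A minimal sketch of the cooperative learning idea: a fast LMS filter, and a slow LMS filter whose update carries an extra term pulling it toward the fast filter's weights. The fixed coupling factor lam here is a hypothetical simplification; the paper's transfer mechanism (active mainly during the transient) may differ.

```python
import numpy as np

def cooperative_lms(x, d, L, mu_fast=0.05, mu_slow=0.005, lam=0.01):
    # Fast filter: plain LMS with a large step size (quick, high EMSE).
    # Slow filter: small step size plus a learning term lam*(wf - ws)
    # that lets it inherit the fast filter's weight information.
    wf = np.zeros(L)
    ws = np.zeros(L)
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]
        wf = wf + mu_fast * u * (d[n] - wf @ u)
        ws = ws + mu_slow * u * (d[n] - ws @ u) + lam * (wf - ws)
    return wf, ws

rng = np.random.default_rng(4)
h = np.zeros(16)
h[1], h[8] = 0.7, -0.4                 # "true" system to identify
x = rng.standard_normal(6000)
d = np.convolve(x, h)[:len(x)]
wf, ws = cooperative_lms(x, d, 16)
```

As both filters approach the same solution, the coupling term vanishes on its own, which mirrors the claim that the filters become almost independent in the steady state.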


National Conference on Communications (NCC) | 2013

Multi stage adaptive filter for identification of the systems with variable sparsity

Bijit Kumar Das; Rajib Lochan Das; Mrityunjoy Chakraborty

Adaptive identification of sparse systems is a popular adaptive signal processing topic due to its applications in acoustic and network echo cancellation, adaptive channel estimation and several other areas. It has been observed that the amount of sparseness in the impulse response of the system to be identified can sometimes vary greatly, depending on the nonstationary nature of the system. Compressive sensing based sparsity-aware adaptive algorithms perform satisfactorily in a strongly sparse environment, but are shown to perform worse than conventional ones when the sparseness of the impulse response decreases. We propose an algorithm which works well in both sparse and non-sparse circumstances and adapts dynamically to the level of sparseness, using a dual-stage adaptive filtering approach in which the outputs of two single-stage adaptive filters running two different algorithms are affinely combined. The proposed algorithm is supported by simulation results that show its robustness against variable sparsity.
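The affine combination used above (mixing weights summing to one but otherwise unconstrained) can be sketched as follows, with the mixing weight eta adapted by a stochastic gradient step on the combined error. The two constituent filters here are plain LMS with different step sizes; in the paper they run two different algorithms, and all step sizes below are illustrative.

```python
import numpy as np

def affine_combination(x, d, L, mu1=0.05, mu2=0.005, mu_eta=0.01):
    # Two independent LMS filters; their outputs are mixed as
    # y = eta*y1 + (1-eta)*y2, so the weights always sum to one
    # while eta itself is free to leave [0, 1] (affine, not convex).
    w1 = np.zeros(L)
    w2 = np.zeros(L)
    eta = 0.5
    out = np.zeros(len(x))
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]
        y1, y2 = w1 @ u, w2 @ u
        y = eta * y1 + (1.0 - eta) * y2
        out[n] = y
        e = d[n] - y
        eta += mu_eta * e * (y1 - y2)    # gradient step minimizing e^2
        w1 += mu1 * u * (d[n] - y1)
        w2 += mu2 * u * (d[n] - y2)
    return out, eta

rng = np.random.default_rng(5)
h = np.zeros(16)
h[0], h[6] = 0.9, -0.3                  # "true" system to identify
x = rng.standard_normal(6000)
d = np.convolve(x, h)[:len(x)]
out, eta = affine_combination(x, d, 16)
```

The combiner automatically leans on whichever constituent filter currently tracks the system better, which is what makes the scheme robust when the sparseness level changes.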


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) | 2012

A zero attracting proportionate normalized least mean square algorithm

Rajib Lochan Das; Mrityunjoy Chakraborty

Collaboration

Top Co-Authors:

Mrityunjoy Chakraborty (Indian Institute of Technology Kharagpur)
Bijit Kumar Das (Indian Institute of Technology Kharagpur)
Vinay Chakravarthi Gogineni (Indian Institute of Technology Kharagpur)
Siddhartha Satpathi (Indian Institute of Technology Kharagpur)
Subrahmanyam Mula (Indian Institute of Technology Kharagpur)