Mrityunjoy Chakraborty
Indian Institute of Technology Kharagpur
Publications
Featured research published by Mrityunjoy Chakraborty.
IEEE Transactions on Circuits and Systems | 2014
Bijit Kumar Das; Mrityunjoy Chakraborty
In practice, one often encounters systems that have a sparse impulse response, with the degree of sparseness varying over time. This paper presents a new approach to identifying such systems that adapts dynamically to the sparseness level of the system and thus works well in both sparse and non-sparse environments. The proposed scheme uses an adaptive convex combination of the LMS algorithm and the recently proposed, sparsity-aware zero-attractor LMS (ZA-LMS) algorithm. It is shown that for non-sparse systems the proposed combined filter always converges to the LMS algorithm (the better of the two filters in the non-sparse case, as it yields lower steady-state excess mean square error (EMSE)), whereas for semi-sparse systems it converges to a solution with lower steady-state EMSE than either of the component filters. For highly sparse systems, depending on the value of a proportionality constant in the ZA-LMS algorithm, the combined filter may either converge to the ZA-LMS-based filter or, as in the semi-sparse case, produce a solution that outperforms both constituent filters. A simplified update formula for the mixing parameter of the adaptive convex combination is also presented. The proposed algorithm requires much lower complexity than existing algorithms, and its claimed robustness against variable sparsity is well supported by simulation results.
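A minimal sketch of the adaptive convex combination idea described above, assuming standard LMS and ZA-LMS component updates and a sigmoid-parameterized mixing factor adapted by stochastic gradient; the filter length, step sizes, and names (`mu`, `rho`, `mu_a`) are illustrative, and the paper's simplified mixing-parameter update is not reproduced here.

```python
import numpy as np

def combined_lms_zalms(x, d, L=16, mu=0.01, rho=1e-4, mu_a=1.0):
    """Adaptive convex combination of LMS and ZA-LMS (illustrative sketch)."""
    w_lms = np.zeros(L)            # plain LMS component
    w_za = np.zeros(L)             # zero-attractor LMS component
    a = 0.0                        # auxiliary variable; mixing parameter = sigmoid(a)
    y = np.zeros(len(d))
    for n in range(L - 1, len(d)):
        u = x[n - L + 1:n + 1][::-1]            # regressor, most recent sample first
        y1, y2 = w_lms @ u, w_za @ u
        lam = 1.0 / (1.0 + np.exp(-a))          # mixing parameter in (0, 1)
        y[n] = lam * y1 + (1.0 - lam) * y2      # combined output
        e, e1, e2 = d[n] - y[n], d[n] - y1, d[n] - y2
        w_lms += mu * e1 * u                              # LMS update
        w_za += mu * e2 * u - rho * np.sign(w_za)         # ZA-LMS: LMS plus zero attractor
        a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)     # gradient step on the mixing variable
        a = np.clip(a, -4.0, 4.0)                         # keep the mixture away from 0 and 1
    lam = 1.0 / (1.0 + np.exp(-a))
    return lam * w_lms + (1.0 - lam) * w_za, y
```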
IEEE Transactions on Speech and Audio Processing | 2005
Mrityunjoy Chakraborty; Hideaki Sakai
Tonal noise is encountered in many active noise control applications. Such noise, usually generated by periodic sources like rotating machines, is cancelled by synthesizing so-called antinoise with a set of adaptive filters trained to model the noise generation mechanism. The performance of such noise cancellation schemes depends, among other things, on the convergence characteristics of the adaptive algorithm deployed. In this paper, we consider a multireference complex least mean square (LMS) algorithm that can be used to train a set of adaptive filters to counter an arbitrary number of periodic noise sources. A deterministic convergence analysis of the multireference algorithm is carried out, and necessary as well as sufficient conditions for convergence are derived by exploiting the properties of the input correlation matrix and a related product matrix. It is also shown that, under the convergence condition, the energy of each error sequence is independent of the tonal frequencies. An optimal step size for fastest convergence is then derived by minimizing the error energy.
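A schematic of a complex LMS update on tonal references, assuming one complex weight per reference exp(jω_k n) and taking the real part of the weighted sum as the synthesized antinoise; secondary-path effects present in a real ANC system and the paper's deterministic analysis are not modelled, and the function name and parameters are illustrative.

```python
import numpy as np

def multiref_complex_lms(d, omegas, mu=0.05):
    """One complex adaptive weight per tonal reference exp(j*omega_k*n);
    the real part of the weighted sum acts as the synthesized antinoise."""
    omegas = np.asarray(omegas, dtype=float)    # digital tonal frequencies (rad/sample)
    w = np.zeros(len(omegas), dtype=complex)
    e = np.zeros(len(d))
    for n in range(len(d)):
        x = np.exp(1j * omegas * n)             # complex tonal references
        y = np.real(w @ x)                      # antinoise estimate
        e[n] = d[n] - y                         # residual (error) signal
        w += mu * e[n] * np.conj(x)             # complex LMS weight update
    return e, w
```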
IEEE Transactions on Signal Processing | 2005
Abhijit Mitra; Mrityunjoy Chakraborty; Hideaki Sakai
An efficient scheme is presented for implementing the LMS-based transversal adaptive filter in block floating-point (BFP) format, which permits processing of data over a wide dynamic range at temporal and hardware complexities significantly less than those of a floating-point processor. Appropriate BFP formats for both the data and the filter coefficients are adopted, chosen so that they remain invariant to interblock transitions and to the weight-updating operation, respectively. Overflow is prevented jointly during the filtering and weight-updating processes by using a dynamic scaling of the data and a slightly reduced range for the step size, with the latter having only a marginal effect on convergence speed. Extensions of the proposed scheme to the sign-sign LMS and signed-regressor LMS algorithms are then taken up to reduce the processing time further. Finally, a roundoff error analysis of the proposed scheme under finite precision is carried out. It is shown that, in the steady state, the quantization noise component of the output mean-square error depends on the step size both linearly and inversely. An optimum step size that minimizes this error is also derived.
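The core idea of block floating-point is that a whole block shares one exponent while individual samples keep only fixed-point mantissas, so the filter's inner products reduce to integer multiply-accumulates with a single exponent adjustment. The sketch below illustrates that representation only, under an assumed mantissa width; the paper's specific formats, dynamic data scaling, and step-size bound are not reproduced.

```python
import numpy as np

B = 12  # assumed mantissa word length

def to_bfp(block, bits=B):
    """Represent a block with integer mantissas and one shared (block) exponent."""
    peak = np.max(np.abs(block))
    exp = 0 if peak == 0 else int(np.ceil(np.log2(peak)))   # common block exponent
    mant = np.round(block / 2.0 ** exp * 2 ** (bits - 1)).astype(np.int64)
    return mant, exp

def bfp_dot(x_mant, x_exp, w_mant, w_exp, bits=B):
    """Inner product done on integer mantissas; exponents handled once at the end."""
    acc = int(x_mant @ w_mant)                               # fixed-point MAC loop
    return acc / 2 ** (2 * (bits - 1)) * 2.0 ** (x_exp + w_exp)

# usage: filter one data block against a coefficient vector
x_block = np.random.randn(16)
w = np.random.randn(16) * 0.1
x_m, x_e = to_bfp(x_block)
w_m, w_e = to_bfp(w)
y = bfp_dot(x_m, x_e, w_m, w_e)    # close to np.dot(x_block, w), up to quantization
```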
IEEE Transactions on Circuits and Systems II: Express Briefs | 2015
Rajib Lochan Das; Mrityunjoy Chakraborty
In this paper, a new convergence analysis is presented for a well-known family of sparse adaptive filters, namely the proportionate-type normalized least mean square (PtNLMS) algorithms, in which, unlike all existing approaches, no whiteness assumption is made on the input. The analysis relies on a transform-domain model of the PtNLMS algorithms and brings out certain new convergence features not reported earlier. In particular, it establishes the universality of the steady-state excess mean square error formula derived earlier under the white-input assumption. In addition, it brings out a new relation between the mean square deviation of each tap weight and the corresponding gain factor used in the PtNLMS algorithm.
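As a concrete member of the PtNLMS family, a plain PNLMS recursion is sketched below: an NLMS update in which each tap receives a gain roughly proportional to the magnitude of its current estimate (the diagonal gain matrix referred to in the analysis). Parameter names and values are illustrative.

```python
import numpy as np

def pnlms(x, d, L=16, mu=0.5, delta=1e-2, rho=0.01):
    """PNLMS: NLMS with per-tap proportionate gains g_l ~ |w_l|."""
    w = np.zeros(L)
    e = np.zeros(len(d))
    for n in range(L - 1, len(d)):
        u = x[n - L + 1:n + 1][::-1]                      # regressor
        e[n] = d[n] - w @ u
        gamma = np.maximum(rho * max(delta, np.max(np.abs(w))), np.abs(w))
        g = gamma / np.mean(gamma)                        # normalized per-tap gains (diag of G)
        w += mu * e[n] * g * u / (u @ (g * u) + delta)    # gain-weighted NLMS step
    return w, e
```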
IEEE Signal Processing Letters | 2013
Siddhartha Satpathi; Rajib Lochan Das; Mrityunjoy Chakraborty
The generalized Orthogonal Matching Pursuit (gOMP) is a recently proposed compressive sensing greedy recovery algorithm that generalizes OMP by selecting N (≥ 1) atoms in each iteration. In this letter, we demonstrate that gOMP can successfully reconstruct a K-sparse signal from a compressed measurement y = Φx within a maximum of K iterations if the sensing matrix Φ satisfies the Restricted Isometry Property (RIP) of order NK, with the RIP constant δ_NK satisfying δ_NK < √N/(√K + 2√N). The proposed bound is an improvement over the existing bound on δ_NK. We also show that by increasing the RIP order by just one (i.e., NK+1 instead of NK), the bound can be refined further to δ_{NK+1} < √N/(√K + √N), which is consistent (for N = 1) with the near-optimal bound on δ_{K+1} for OMP.
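A compact sketch of the gOMP recovery loop described above: in each of at most K iterations, the N columns of Φ most correlated with the current residual are added to the support, and the estimate is refit by least squares on that support. The stopping threshold is illustrative.

```python
import numpy as np

def gomp(y, Phi, K, N=2):
    """generalized OMP: pick the N largest-correlation atoms per iteration,
    then re-estimate by least squares on the enlarged support."""
    m, n = Phi.shape
    S = []                                   # current support set
    r = y.copy()                             # residual
    for _ in range(K):                       # at most K iterations (per the recovery guarantee)
        corr = np.abs(Phi.T @ r)
        corr[S] = 0                          # do not reselect chosen atoms
        S.extend(np.argsort(corr)[-N:].tolist())
        x_S, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        r = y - Phi[:, S] @ x_S
        if np.linalg.norm(r) < 1e-12:        # residual small enough: stop early
            break
    x_hat = np.zeros(n)
    x_hat[S] = x_S
    return x_hat
```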
IEEE Transactions on Circuits and Systems II: Express Briefs | 2005
Mrityunjoy Chakraborty; Anindya Sundar Dhar; Moon Ho Lee
This paper presents an alternative formulation of the least mean square (LMS) algorithm using a set of angle variables monotonically related to the filter coefficients. The algorithm updates the angles directly instead of the filter coefficients and relies on quantities that can be realized by simple CORDIC rotations. Two architectures based on a pipelined CORDIC unit are proposed, achieving efficiency either in time or in area. Further simplifications result from extending the approach to the sign-sign LMS algorithm. An approximate convergence analysis of the proposed algorithm, along with simulation results showing its convergence characteristics, is presented.
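The brief does not spell out the angle map here, so the sketch below assumes the monotonic parameterization w_l = sin(θ_l) with θ_l confined to [−π/2, π/2]; the chain rule then turns the LMS gradient on w into an update on θ. In hardware, the sine/cosine evaluations and the angle updates would be realized by pipelined CORDIC rotations, which are not modelled here.

```python
import numpy as np

def angle_lms(x, d, L=8, mu=0.05):
    """LMS reparameterized in angle variables, assuming w = sin(theta)."""
    theta = np.zeros(L)
    e = np.zeros(len(d))
    for n in range(L - 1, len(d)):
        u = x[n - L + 1:n + 1][::-1]
        w = np.sin(theta)                        # coefficients recovered from the angles
        e[n] = d[n] - w @ u
        theta += mu * e[n] * u * np.cos(theta)   # chain rule: dw/dtheta = cos(theta)
        theta = np.clip(theta, -np.pi / 2, np.pi / 2)   # keep the map monotonic
    return np.sin(theta), e
```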
IEEE Transactions on Signal Processing | 1998
Mrityunjoy Chakraborty
An efficient algorithm is presented for inverting matrices that are periodically Toeplitz, i.e., whose diagonal and subdiagonal entries exhibit periodic repetitions. Such matrices are not persymmetric and thus cannot be inverted by Trench's (1964) method. An alternative approach based on appropriate matrix factorization and partitioning is suggested. The algorithm provides insight into the formation of the inverse matrix, is implementable on a set of circularly pipelined processors and, as a special case, can be used for inverting a set of block Toeplitz matrices without requiring any matrix operation.
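For concreteness, the snippet below builds a small matrix with an assumed structural definition drawn from the abstract, namely A[i, j] determined by the diagonal index i − j and the phase i mod p, so every diagonal repeats with period p, and inverts it with a generic dense solver. The paper's algorithm instead exploits this structure through factorization and partitioning, which is not reproduced here.

```python
import numpy as np

def periodically_toeplitz(n, p, seed=0):
    """n x n matrix whose entries along each diagonal repeat with period p
    (assumed definition: A[i, j] depends only on i - j and i mod p)."""
    rng = np.random.default_rng(seed)
    t = rng.standard_normal((2 * n - 1, p))      # one generator per diagonal and phase
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            A[i, j] = t[i - j + n - 1, i % p]
    return A

A = periodically_toeplitz(6, p=2)
A_inv = np.linalg.inv(A)   # generic O(n^3) inverse; the paper's method exploits the structure
```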
International Symposium on Circuits and Systems | 2012
Rajib Lochan Das; Mrityunjoy Chakraborty
In this paper, we provide an overview of the major developments in the area of sparse adaptive filters, starting from the celebrated work on the PNLMS algorithm and its several variants and moving to more recent approaches that use the compressed sensing framework, specifically LASSO and basis pursuit or matching pursuit, to develop sparse adaptive algorithms with improved mean square error and tracking properties. We also present a new approach to identifying sparse systems with time-varying sparseness, for which a novel scheme of cooperative learning involving PNLMS- and NLMS-based adaptive filters is developed.
IEEE Signal Processing Letters | 2007
Mrityunjoy Chakraborty; Hing Cheung So; Zheng Jun
In this letter, we address the problem of adaptively estimating the time delay of a noisy sinusoid received at two spatially separated sensors. By choosing the sampling frequency equal to four times the signal frequency, a simple adaptive algorithm for direct delay estimation is derived. Convergence of the algorithm in the mean and in the mean square error sense is proved. Computer simulations are included to demonstrate the effectiveness of the proposed method.
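A plausible gradient-descent realization of direct delay estimation, built on the fact that when fs = 4·f0 the clean delayed sinusoid satisfies x2(n) = cos(ω0·D)·x1(n) + sin(ω0·D)·x1(n−1); the delay estimate (in samples) is then adapted to minimize the resulting modelling error. This is a sketch under that assumption, not a reproduction of the letter's algorithm.

```python
import numpy as np

def adaptive_delay_estimate(x1, x2, f0, fs, mu=0.01, D0=0.0):
    """Direct delay estimation for a sinusoid at two sensors, assuming fs = 4*f0."""
    w0 = 2 * np.pi * f0 / fs          # digital frequency; equals pi/2 when fs = 4*f0
    D = D0                            # delay estimate in samples
    for n in range(1, len(x1)):
        y = np.cos(w0 * D) * x1[n] + np.sin(w0 * D) * x1[n - 1]   # model of x2(n)
        e = x2[n] - y
        grad = w0 * (np.sin(w0 * D) * x1[n] - np.cos(w0 * D) * x1[n - 1])
        D -= mu * e * grad            # LMS-style update of the delay itself
    return D
```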
IEEE Signal Processing Letters | 2005
Mrityunjoy Chakraborty; Abhijit Mitra
We present a novel scheme to implement the gradient adaptive lattice (GAL) algorithm using block floating-point (BFP) arithmetic, which permits processing of data over a wide dynamic range at a cost significantly less than that of a floating-point (FP) processor. Appropriate formats for the input data, the prediction errors, and the reflection coefficients are adopted, chosen so that the formats of the prediction errors and the reflection coefficients remain invariant to the respective order- and time-update processes. Overflow during prediction error computation and reflection coefficient updating is prevented by using an appropriate exponent assignment algorithm and an upper bound on the step-size mantissa.
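For reference, the GAL recursions that the scheme implements are sketched below in ordinary floating-point arithmetic, with power-normalized gradient updates of the reflection coefficients; the BFP formats, exponent assignment, and step-size mantissa bound that constitute the paper's contribution are not shown, and the parameter names and values are illustrative.

```python
import numpy as np

def gal(x, M=4, mu=0.05, beta=0.9, eps=1e-6):
    """Gradient adaptive lattice in ordinary floating point (reference sketch only)."""
    k = np.zeros(M)               # reflection coefficients
    E = np.full(M, eps)           # per-stage power estimates for step-size normalization
    b_prev = np.zeros(M + 1)      # backward prediction errors at the previous instant
    for n in range(len(x)):
        f = np.zeros(M + 1)
        b = np.zeros(M + 1)
        f[0] = b[0] = x[n]        # zeroth-order errors are the input itself
        for m in range(1, M + 1):
            f[m] = f[m - 1] + k[m - 1] * b_prev[m - 1]          # forward error, order m
            b[m] = b_prev[m - 1] + k[m - 1] * f[m - 1]          # backward error, order m
            E[m - 1] = beta * E[m - 1] + (1 - beta) * (f[m - 1] ** 2 + b_prev[m - 1] ** 2)
            k[m - 1] -= (mu / E[m - 1]) * (f[m] * b_prev[m - 1] + b[m] * f[m - 1])
        b_prev = b
    return k
```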