Publication


Featured research published by Bijit Kumar Das.


IEEE Transactions on Circuits and Systems | 2014

Sparse Adaptive Filtering by an Adaptive Convex Combination of the LMS and the ZA-LMS Algorithms

Bijit Kumar Das; Mrityunjoy Chakraborty

In practice, one often encounters systems that have a sparse impulse response, with the degree of sparseness varying over time. This paper presents a new approach to identifying such systems that adapts dynamically to the sparseness level of the system and thus works well in both sparse and non-sparse environments. The proposed scheme uses an adaptive convex combination of the LMS algorithm and the recently proposed, sparsity-aware zero-attractor LMS (ZA-LMS) algorithm. It is shown that for non-sparse systems the proposed combined filter always converges to the LMS algorithm (the better of the two in the non-sparse case, as it yields lower steady-state excess mean square error (EMSE)), whereas for semi-sparse systems it converges to a solution that produces lower steady-state EMSE than either component filter. For highly sparse systems, depending on the value of a proportionality constant in the ZA-LMS algorithm, the combined filter may either converge to the ZA-LMS-based filter or, as in the semi-sparse case, produce a solution that outperforms both constituent filters. A simplified update formula for the mixing parameter of the adaptive convex combination is also presented. The proposed algorithm has much lower complexity than existing algorithms, and its claimed robustness against variable sparsity is well supported by simulation results.


International Symposium on Circuits and Systems | 2011

Adaptive identification of sparse systems with variable sparsity

Bijit Kumar Das; Mrityunjoy Chakraborty; Soumitro Banerjee

In the context of system identification, the level of sparseness in the system impulse response can sometimes vary greatly owing to the time-varying nature of the system. When the response is strongly sparse, convergence of conventional approaches such as least mean square (LMS) is poor. The recently proposed, compressive-sensing-based, sparsity-aware ZA-LMS algorithm performs satisfactorily in strongly sparse environments, but is shown to perform worse than conventional LMS when the sparseness of the impulse response decreases. We propose an algorithm, based on a convex combination, that adapts dynamically to the level of sparseness and works well in both sparse and non-sparse circumstances. The proposed algorithm is supported by simulation results that show its robustness against variable sparsity.


4th International Workshop on Cognitive Information Processing (CIP) | 2014

A comparative study of two popular families of sparsity-aware adaptive filters

Bijit Kumar Das; Luis Antonio Azpicueta-Ruiz; Mrityunjoy Chakraborty; Jerónimo Arenas-García

In this paper, we review two families of sparsity-aware adaptive filters. Proportionate-type NLMS filters try to accelerate convergence by assigning each filter weight a different gain that depends on its current value. Sparsity-norm regularized filters penalize the cost function minimized by the filter with sparsity-promoting norms (such as ℓ0 or ℓ1) and derive new stochastic gradient descent rules from the regularized cost function. We compare both families in terms of computational complexity and study how well they handle the tradeoff between convergence speed and steady-state error. We conclude that sparsity-norm regularized filters are computationally less expensive and can achieve a better tradeoff, making them more attractive in principle. However, selecting the strength of the regularization term appears to be critical to the good performance of these filters.


IEEE Transactions on Circuits and Systems II: Express Briefs | 2017

A Convex Combination of NLMS and ZA-NLMS for Identifying Systems With Variable Sparsity

Bijit Kumar Das; G. Vinay Chakravarthi; Mrityunjoy Chakraborty

This brief aims to identify and track a sparse system with time-varying sparseness using a convex combination of two adaptive filters, one based on the sparsity-unaware normalized least mean square (NLMS) algorithm and the other on the sparsity-aware zero-attracting NLMS (ZA-NLMS) algorithm. An analysis of the proposed combination reveals that while it converges to the ZA-NLMS- or NLMS-based filter for systems that are highly sparse or highly non-sparse, respectively (i.e., the better of the two under the given sparsity condition), it may lead to a filter that performs better than both constituent filters for systems lying between moderately sparse and moderately non-sparse. The same is confirmed via detailed simulation studies under different sparsity conditions.


International Symposium on Circuits and Systems | 2015

Sparse distributed learning via heterogeneous diffusion adaptive networks

Bijit Kumar Das; Mrityunjoy Chakraborty; Jerónimo Arenas-García

In-network distributed estimation of sparse parameter vectors via diffusion LMS strategies has been studied extensively in recent years. In all existing works, some convex regularization approach is used at each node of the network to achieve overall network performance superior to that of simple diffusion LMS, albeit at the cost of increased computational overhead. In this paper, we provide analytical as well as experimental results showing that the convex regularization can be applied selectively to only some chosen nodes, keeping the rest of the nodes sparsity-agnostic, while still enjoying the same optimum behavior as when the convex regularization is deployed at all nodes. Because a subset of nodes runs unregularized learning, the proposed approach needs less computation. We also provide a guideline for selecting the sparsity-aware nodes and a closed-form expression for the optimum regularization parameter.


International Symposium on Circuits and Systems | 2017

A block-based convex combination of NLMS and ZA-NLMS for identifying sparse systems with variable sparsity

Bijit Kumar Das; Mrityunjoy Chakraborty

This paper presents a novel block-based convex combination of two adaptive filters, the sparsity-unaware NLMS and the sparsity-aware zero-attracting NLMS (ZA-NLMS), for identifying and tracking a sparse system with variable sparsity. The proposed scheme partitions the system impulse response into blocks. Since, in practice, sparse systems most often exhibit sparseness in blocks, such partitioning renders inactive blocks fully or almost fully sparse and active blocks highly non-sparse. For active and inactive blocks, the proposed convex combination switches to the NLMS- and ZA-NLMS-based filters, respectively, with the latter suffering no performance degradation from zero attraction on active taps, since the inactive blocks are almost fully sparse. The combined effect on the overall filter is a greatly reduced steady-state excess mean square error, which is corroborated by simulation studies.


Digital Signal Processing | 2018

An adaptive convex combination of APA and ZA-APA for identifying systems having variable sparsity and correlated input

Vinay Chakravarthi Gogineni; Bijit Kumar Das; Mrityunjoy Chakraborty

In this paper, we present an efficient algorithm for identifying and tracking the impulse response of a sparse system that exhibits time-varying sparseness and is driven by correlated input. The proposed method convexly combines the outputs of two filters, namely the sparsity-unaware affine projection algorithm (APA) and the sparsity-aware zero-attracting affine projection algorithm (ZA-APA), each trying to identify the same system using the same input. The combining parameter is adapted by steepest descent on the error variance at the convex combination output. A detailed performance analysis of the proposed combination reveals that while for highly non-sparse and highly sparse systems it converges to the APA and ZA-APA, respectively (i.e., the better of the two filters under the given level of sparsity), for certain sparsity ranges it leads to a combined filter that performs better than both constituent filters. The claims made are validated by exhaustive simulation studies using white, colored, and speech inputs.


International Symposium on Circuits and Systems | 2016

A new diffusion sparse RLS algorithm with improved convergence characteristics

Bijit Kumar Das; Mrityunjoy Chakraborty

A new sparsity-aware recursive least squares (RLS) algorithm is proposed for distributed learning in a diffusion network. The algorithm deploys an RLS-based adaptive filter at each node, made sparsity-aware by regularizing the conventional RLS cost function with a sparsity-promoting penalty. The regularization introduces certain "zero-attracting" terms in the RLS update equation that help shrink the coefficients. Each node shares its tap-weight information with every other node in its neighborhood and refines its own estimate by linearly combining the incoming tap-weight information from neighboring nodes using a set of pre-defined weights. Results on both first- and second-order convergence of the algorithm are also provided. As simulations show, the proposed scheme outperforms existing algorithms in both convergence speed and steady-state excess mean square error.


IEEE Transactions on Circuits and Systems II: Express Briefs | 2016

Sparse Distributed Estimation via Heterogeneous Diffusion Adaptive Networks

Bijit Kumar Das; Mrityunjoy Chakraborty; Jerónimo Arenas-García

Recently, diffusion networks have been proposed to identify sparse linear systems which employ sparsity-aware algorithms like the zero-attracting LMS (ZA-LMS) at each node to exploit sparsity. In this brief, we show that the same optimum performance as reached by the aforementioned networks can also be achieved by a “heterogeneous” network with only a fraction of the nodes deploying ZA-LMS-based adaptation, provided that the ZA-LMS-based nodes are distributed over the network maintaining some “uniformity.” Reduction in the number of sparsity-aware nodes reduces the overall computational burden of the network. We show analytically and also by simulation studies that the only adjustment needed to achieve this reduction is a proportional increase in the value of the optimum zero attracting coefficient.


International Conference on Digital Signal Processing | 2015

On steady state tracking performance of adaptive networks

Bijit Kumar Das; Luis A. Azpicueta-Ruiz; Mrityunjoy Chakraborty; Jerónimo Arenas-García

In this paper, we evaluate the steady-state tracking EMSEs of LMS-based and RLS-based homogeneous adaptive networks using the popular random-walk model for time-varying systems. We then extend this treatment to a "heterogeneous" network that deploys both LMS- and RLS-based nodes, with each node having an approximately equal share of LMS- and RLS-based neighbors. We use intuitive arguments to show that the proposed heterogeneous network has enhanced tracking capability compared with its homogeneous counterparts. The claims made are validated by detailed simulation studies.

Collaboration


Top co-authors of Bijit Kumar Das:

Mrityunjoy Chakraborty (Indian Institute of Technology Kharagpur)
Rajib Lochan Das (Indian Institute of Technology Kharagpur)
Arpita Dutta (Indian Institute of Technology Kharagpur)
G. Vinay Chakravarthi (Indian Institute of Technology Kharagpur)
Samrat Mukhopadhyay (Indian Institute of Technology Kharagpur)
Santanu Chattopadhyay (Indian Institute of Technology Kharagpur)
Vinay Chakravarthi Gogineni (Indian Institute of Technology Kharagpur)