Sumit Chakravarty
New York Institute of Technology
Publications
Featured research published by Sumit Chakravarty.
conference on information and knowledge management | 2011
Madhushri Banerjee; Sumit Chakravarty
Data mining often suffers from the curse of dimensionality: huge numbers of dimensions or attributes in the data pose serious problems for data mining tasks. Traditionally, dimensionality reduction techniques like Principal Component Analysis have been used to address this problem. However, the need might be to remain in the original attribute space and identify the key predictive attributes instead of moving to a transformed space; as a result, feature subset selection has become an important area of research over the last few years. With the advent of network technologies, data is sometimes distributed across multiple locations and held by multiple parties, and the biggest concern while sharing data is privacy. In this paper a secure distributed protocol is proposed that allows multiple parties to perform feature selection without revealing their own data. The proposed distributed feature selection method evolved from virtual dimension reduction, a method used in hyperspectral image processing to select a subset of hyperspectral bands for further analysis. Experimental results with real-life datasets demonstrate the effectiveness of the proposed method.
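The virtual dimension reduction idea the protocol builds on can be illustrated with a minimal eigenvalue-thresholding sketch. This is a generic illustration, not the authors' secure protocol: the synthetic data, the known noise variance, and the factor-of-two threshold are all assumptions.

```python
import numpy as np

def estimate_virtual_dimension(X, noise_var=1.0):
    """Count informative dimensions by keeping covariance eigenvalues
    that clearly exceed the noise floor.  X: (n_samples, n_features)."""
    cov = np.cov(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    # A margin of 2x the noise variance separates signal eigenvalues
    # from the sampling spread of pure-noise eigenvalues.
    return int(np.sum(eigvals > 2.0 * noise_var))

rng = np.random.default_rng(0)
# 3 informative directions embedded in 10-dimensional unit-variance noise
latent = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10)) * 3.0
X = latent + rng.normal(size=(500, 10))
print(estimate_virtual_dimension(X))
```

The selected count could then drive which original attributes to keep, which is where the distributed, privacy-preserving protocol of the paper would take over.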
Proceedings of SPIE | 2015
Yixuan Sun; Sumit Chakravarty
In this paper we address two issues regarding cognitive radio spectrum sensing. Spectrum sensing for cognitive radio has been studied extensively in the recent past, and multiple techniques have been proposed. One such technique is entropy-based detection, in which we measure the entropy of the received signal after converting it to the frequency domain. The logic is that in the frequency domain the entropy of noise (assuming it is AWGN) is higher than that of the signal, enabling us to separate noise from signal using an entropy-based threshold. This approach, however, makes some assumptions that may not be valid: it assumes that at any time only one of the two (signal or noise) is present, that a given test segment is entirely either a signal segment or a noise segment, and that the length of such a segment is fixed and known. These assumptions may be too constraining, and we propose an alternate method to address them. We use a filtering technique in the form of Independent Component Analysis to segment the signal, and additionally use energy weighting of the components to estimate the signal strength. We test our proposed method on a variety of signals, including image, audio and sinusoidal signals. Results show the improvement in performance as well as the availability of new measures generated by our proposed technique.
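The entropy-based detection baseline described above can be sketched as follows. This is a minimal illustration, assuming a histogram-of-magnitudes entropy estimate; the bin count and test signals are illustrative, and the paper's ICA-based method is not reproduced here.

```python
import numpy as np

def spectral_entropy(x, n_bins=64):
    """Shannon entropy of the magnitude-spectrum histogram of a segment."""
    mag = np.abs(np.fft.rfft(x))
    hist, _ = np.histogram(mag, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
n = 4096
noise = rng.normal(size=n)                              # AWGN only
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.normal(size=n)
# AWGN spreads energy evenly across frequency bins, so its spectral
# entropy exceeds that of a structured (tonal) signal.
print(spectral_entropy(noise) > spectral_entropy(signal))
```

A detector would compare each segment's entropy against a threshold calibrated on noise-only data; the assumptions the paper criticizes (known segment length, pure signal-or-noise segments) are exactly what this baseline requires.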
international conference on signal processing | 2014
Pai Zhu; Hong Wang; Xining Wang; Sumit Chakravarty; Donglin Wang
Dynamic spectrum access is an effective approach to improving spectrum utilization. When a primary user (PU) leases its spectrum to secondary users (SUs), spectrum efficiency can be enhanced; however, the PU might suffer interference from the coexisting SUs. This paper considers the situation in which the PU is willing to coexist with SUs once it is paid and its Signal-to-Interference-plus-Noise Ratio (SINR) is guaranteed. The lease fee could be charged by time or by data flow. A continuous-time Markov chain (CTMC) is built to measure the PU's benefits and potential losses. In computing the steady-state probabilities of the CTMC, an augmented matrix is devised. Detection error is also considered in the CTMC, and a method is proposed for choosing the decision threshold that ensures fair contention among the SUs. Simulations show the expected profits of the PU and the fair competition among SUs. A specific application is considered in the simulation, which can help decide whether cooperation between SUs in a relay channel is profitable.
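The augmented-matrix computation of CTMC steady-state probabilities can be sketched generically: the balance equations pi Q = 0 are combined with the normalization sum(pi) = 1 by appending a column of ones to the generator. The two-state chain and its rates below are illustrative, not the paper's spectrum-access model.

```python
import numpy as np

def ctmc_steady_state(Q):
    """Steady-state distribution pi of a CTMC with generator Q:
    solve pi Q = 0 together with sum(pi) = 1 via an augmented matrix."""
    n = Q.shape[0]
    A = np.hstack([Q, np.ones((n, 1))])      # augment with normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0                              # the sum-to-one constraint
    pi, *_ = np.linalg.lstsq(A.T, b, rcond=None)
    return pi

# Illustrative 2-state chain: idle <-> busy with rates 2 and 3
Q = np.array([[-2.0,  2.0],
              [ 3.0, -3.0]])
print(ctmc_steady_state(Q))   # balance gives pi = [0.6, 0.4]
```

The same solve works for the paper's larger state space; only the generator matrix changes.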
international conference on service operations and logistics, and informatics | 2014
Pai Zhu; Chaoqun Dong; Ruimin Hu; Jiankun Chen; Sumit Chakravarty; Donglin Wang
Cognitive Radio (CR) is a promising technology for fully utilizing the constrained spectrum, and cooperative spectrum sharing in cognitive networks is regarded as an important and challenging issue. This paper considers a primary user (PU) that is willing to coexist with secondary users (SUs) once it is paid and its Signal-to-Interference-plus-Noise Ratio (SINR) is assured. Since the spectrum state is not deterministic, an 8-state continuous-time Markov chain (CTMC-8), based on a scenario with one PU and two SUs, is built to predict the reward and cost of the PU over a future period. A formula for the expected profit difference (EPD) between the cooperative and uncooperative states is derived to help the PU decide whether to permit cooperation in a future period. The SINR of the relay user is also assured so as to prevent the cooperation from being badly affected. Simulations show the predicted profits before and after cooperation, and show that the model can effectively help decide whether to permit cooperation under different location distributions.
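Predicting a PU's reward over a future period from a CTMC amounts to integrating the transient state-occupancy probabilities against per-state reward rates. The sketch below uses a generic two-state chain (cooperative vs. uncooperative) rather than the paper's eight-state model; the rates, rewards, and horizon are assumptions.

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via truncated Taylor series (fine for small Q*dt)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def expected_profit(Q, pi0, reward, horizon, steps=200):
    """Expected cumulative reward over [0, horizon]: propagate occupancy
    probabilities p(t) = pi0 expm(Q t) and accumulate reward-rate * dt."""
    dt = horizon / steps
    P_dt = expm_series(Q * dt)
    p, total = pi0.copy(), 0.0
    for _ in range(steps):
        total += (p @ reward) * dt   # reward accrued in this time slice
        p = p @ P_dt                 # advance occupancy by dt
    return total

# 2 states: uncooperative (low reward rate) vs cooperative lease (higher)
Q = np.array([[-1.0,  1.0],
              [ 0.5, -0.5]])
pi0 = np.array([1.0, 0.0])           # start uncooperative
reward = np.array([1.0, 3.0])        # profit rates per unit time
profit = expected_profit(Q, pi0, reward, horizon=10.0)
print(profit)
```

An EPD-style decision would compute this quantity under the cooperative and uncooperative generators and compare the two totals.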
Proceedings of SPIE | 2014
Sumit Chakravarty; Chaoqun Dong; Boyu Wang; Madhushri Banerjee
Advanced Kalman filters have been used extensively in the domain of video-based tracking of target objects. They can be viewed as extensions of the Kalman filtering principle: instead of using the object's point mass as the tracker, as in the standard Kalman filter, alterations are made to incorporate advanced strategies. This is the typical formulation of the Kalman Enhanced Filter (KEF). Even though this allows the use of nonlinearity for state prediction, it is constrained by its choice of the Kalman state transition function. Furthermore, the KEF does not provide a methodology for selecting the prior distribution. Proper tuning of these choices is critical for the performance of the KEF. This work addresses these constraints of the KEF, particularly targeting two significant areas. First, it automates the state matrix generation process by fusing an alternate tracking mechanism into the KEF. This novel technique is tested on real video sequences and its efficacy is quantified.
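For reference, the baseline that such filters extend is the standard Kalman filter with a constant-velocity state model. The sketch below tracks a 1-D position; the noise covariances and the simulated target are illustrative, not the paper's KEF.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 1-D position measurements.
    State x = [position, velocity]; returns filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

rng = np.random.default_rng(2)
true_pos = 0.5 * np.arange(50)                   # target at 0.5 units/frame
noisy = true_pos + rng.normal(scale=0.5, size=50)
filtered = kalman_track(noisy)
# Filtering should reduce mean squared error versus the raw measurements.
print(np.mean((filtered - true_pos) ** 2) < np.mean((noisy - true_pos) ** 2))
```

The fixed `F` here is exactly the hand-chosen state transition the paper criticizes; the KEF work replaces this manual choice with an automated state matrix generation process.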
machine vision applications | 2013
Xuezhang Hu; Sumit Chakravarty; Qi She; Boyu Wang
Video object segmentation entails selecting and extracting objects of interest from a video sequence. Video Segmentation of Objects (VSO) is a critical task with many applications, such as video editing, video decomposition and object recognition. The core of a VSO system consists of two major problems of computer vision, namely object segmentation and object tracking. These two difficulties need to be solved in tandem in an efficient manner to handle variations in shape deformation, appearance alteration and background clutter. Along with segmentation quality, computational expense is also a critical parameter for algorithm development. Most existing methods apply advanced tracking algorithms such as mean shift and particle filters together with object segmentation schemes like level sets or graph methods. As video is spatiotemporal data, it offers an extensive opportunity to focus on regions of high spatiotemporal variation. We propose a new algorithm that concentrates on the high-variation parts of the video data and uses modified hierarchical processing to capture the spatiotemporal variation. The novelty of the research presented here is to combine a fast object tracking algorithm with graph-cut-based segmentation in a hierarchical framework. This involves modifying both the object tracking algorithm and the graph-cut segmentation algorithm to work in an optimized manner within a local spatial region while also ensuring that all relevant motion has been accounted for. Using an initial estimate of the object and a hierarchical pyramid framework, the proposed algorithm tracks and segments the object of interest in subsequent frames. The modified hierarchical framework enables local processing of the video, allowing the algorithm to target specific regions where high spatiotemporal variation occurs. Experiments performed with high-frame-rate video data show the viability of the proposed approach.
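The idea of restricting expensive processing to regions of high spatiotemporal variation within a pyramid can be sketched as follows. This is a simplified frame-difference illustration; the paper's tracker and graph-cut stages are not reproduced, and the pyramid depth and threshold are assumptions.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (one coarse pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def high_variation_mask(prev, curr, levels=2, thresh=10.0):
    """Locate spatiotemporal change coarse-to-fine: threshold the frame
    difference at the coarsest pyramid level, then upsample the mask so
    later (expensive) segmentation runs only inside it."""
    a, b = prev.astype(float), curr.astype(float)
    for _ in range(levels):
        a, b = downsample(a), downsample(b)
    mask = np.abs(b - a) > thresh
    for _ in range(levels):
        mask = np.kron(mask, np.ones((2, 2), dtype=bool))   # upsample 2x
    return mask

prev = np.zeros((64, 64))
curr = np.zeros((64, 64))
curr[20:30, 20:30] = 255.0              # a bright object appears here
mask = high_variation_mask(prev, curr)
print(mask[24, 24], mask[0, 0])         # inside vs outside the change
```

Only the pixels flagged by `mask` would then be handed to the tracking and graph-cut stages, which is what makes the hierarchical scheme cheap on high-frame-rate video.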
international conference on control applications | 2013
Jingting Yao; Matthew R. Abreu; Sumit Chakravarty
Ultrasonic motors (USMs) are a new type of high-precision positioning drive system. They provide significant advantages over traditional electromagnetic motors, such as fast control, high torque, low electromagnetic interference and light weight. It is, however, difficult to formulate an exact mathematical model of the ultrasonic motor due to the complex nonlinearities that arise from its use of friction and the inverse piezoelectric effect as its driving mechanism. These nonlinearities pose significant problems for precise position control of the motor. Previous research has therefore suggested control designs using elaborate nonlinear control schemes, such as Sliding Mode Control (SMC) used in tandem with machine learning algorithms like genetic algorithms. The drawback of these approaches is the computational cost and complexity involved. In this paper we present a new scheme for the control of USMs. We divide the problem into two complementary parts: estimation of the motor's parameters, and controller design based on the estimator's result. Depending on the precision requirement of the application, the estimation process can be made nonlinear and/or time-varying. Similarly, based on the affordable computational complexity, we can choose a linear or a nonlinear controller. Experiments show the comparative performance of these different combinations, and guidelines are suggested for choosing among them depending on the user's requirements.
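The estimator/controller split can be illustrated with its simplest (linear, time-invariant) instance: recursive least squares identifying a first-order plant, followed by a certainty-equivalence controller built on the estimates. The first-order model and its parameters are assumptions for illustration, not the actual USM dynamics.

```python
import numpy as np

class RLSEstimator:
    """Recursive least squares for an assumed first-order model
    y[k+1] = a*y[k] + b*u[k]."""
    def __init__(self, n=2, lam=0.99):
        self.theta = np.zeros(n)     # parameter estimates [a, b]
        self.P = 1e3 * np.eye(n)     # large initial covariance
        self.lam = lam               # forgetting factor
    def update(self, phi, y):
        phi = np.asarray(phi, float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Stage 1: estimate the unknown plant (a=0.9, b=0.5) from excited data.
a_true, b_true = 0.9, 0.5
est = RLSEstimator()
rng = np.random.default_rng(3)
y = 0.0
for _ in range(200):
    u = rng.normal()                   # persistent excitation
    y_next = a_true * y + b_true * u
    est.update([y, u], y_next)
    y = y_next
a_hat, b_hat = est.theta

# Stage 2: certainty-equivalence one-step controller toward a setpoint.
setpoint, y = 1.0, 0.0
for _ in range(100):
    u = (setpoint - a_hat * y) / b_hat
    y = a_true * y + b_true * u
print(round(a_hat, 3), round(b_hat, 3), round(y, 3))
```

A nonlinear or time-varying estimator, or a nonlinear controller, would slot into the same two-stage structure at higher computational cost, which is the trade-off the paper studies.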
Proceedings of SPIE | 2013
Sumit Chakravarty; Wenjie Cao; Alim Samat
This paper presents a novel approach to the task of hyperspectral signature analysis. Hyperspectral signature analysis has been studied extensively in the literature, and many algorithms have been developed that endeavor to discriminate between hyperspectral signatures. Binary coding approaches such as SPAM and SFBC use basic statistical thresholding to binarize a signature; the binarized signatures are then compared using Hamming distance. This framework has been extended in techniques such as SDFC, wherein a set of primitive structures is used to characterize local variations in a signature together with overall statistical measures like the mean. Such structures harness only local variations and do not exploit any covariation between spectrally distinct parts of the signature. The approach of this research is to harvest such information using a technique similar to circular convolution. We consider the signature as cyclic by joining its two ends, and create two copies of the spectral signature. These three signatures can be placed next to each other like the rotating discs of a combination lock. We then find local structures at different circular shifts between the three cyclic spectral signatures. Texture features, as in SDFC, can be used to study the local structural variation at each circular shift. Different measures can then be created by building histograms over the shifts and applying different information-extraction techniques to the histograms; depending on the technique used, different variants of the proposed algorithm are obtained. Experiments show the viability of the proposed methods and their performance compared to current binary signature coding techniques.
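The cyclic comparison of binary-coded signatures can be sketched as below: each signature is binarized against its mean (as in SPAM-style coding) and compared at every circular shift. The texture-feature and histogram stages of the paper are omitted, and the synthetic signatures are illustrative.

```python
import numpy as np

def binary_code(sig):
    """SPAM-style binary coding: 1 where a band exceeds the signature mean."""
    return (sig > sig.mean()).astype(int)

def min_circular_hamming(sig_a, sig_b):
    """Treat signatures as cyclic and take the minimum Hamming distance
    between their binary codes over all circular shifts."""
    a, b = binary_code(sig_a), binary_code(sig_b)
    return min(int(np.sum(a != np.roll(b, s))) for s in range(len(b)))

bands = np.linspace(0, 2 * np.pi, 64, endpoint=False)
sig1 = np.sin(bands + 0.1)
sig2 = np.roll(sig1, 5) + 0.01     # same shape, circularly shifted + offset
sig3 = np.cos(3 * bands)           # spectrally different structure
d_same = min_circular_hamming(sig1, sig2)
d_diff = min_circular_hamming(sig1, sig3)
print(d_same, d_diff)              # shifted copy matches; other stays far
```

Instead of only the minimum, the paper's variants keep the whole distance-versus-shift profile and build histograms over it, which is where the cross-band covariation information lives.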
Proceedings of SPIE | 2013
Madhushri Banerjee; Sumit Chakravarty; Huiling Da
The curse of dimensionality often hinders the process of data mining. The data collected and analyzed generally contains a huge number of dimensions or attributes, and it may be the case that not all of the attributes are necessary for the data mining task at hand. Traditionally, dimensionality reduction techniques like Principal Component Analysis or Linear Discriminant Analysis have been used to address this problem, but these methods move the original data to a transformed space. However, the need might be to remain in the original attribute space and identify the key attributes for data analysis. This need has given rise to the research area of feature subset selection. In this paper we use a solid angle measure to tackle the problem of dimension reduction in OCT retinal data. Optical Coherence Tomography (OCT) is an established and frequently used medical imaging technique; it is widely used, among other applications, to obtain high-resolution images of the retina and the anterior segment of the eye. The solid angle measure is used to characterize and select features obtained from OCT retinal images. The application of solid angle to feature selection, as proposed in this paper, is a unique approach to OCT image data mining. Experimental results with real-life datasets demonstrate the effectiveness of the proposed method.
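The abstract does not define the solid angle measure itself. As a generic illustration of computing a solid angle from a triple of 3-D feature vectors, the Van Oosterom-Strackee formula can be used; the connection between this particular formula and the authors' feature-selection measure is an assumption.

```python
import numpy as np

def triangle_solid_angle(r1, r2, r3):
    """Solid angle subtended at the origin by triangle (r1, r2, r3),
    via the Van Oosterom-Strackee formula:
    tan(Omega/2) = r1.(r2 x r3) / (|r1||r2||r3| + sum of pairwise terms)."""
    r1, r2, r3 = (np.asarray(v, float) for v in (r1, r2, r3))
    n1, n2, n3 = (np.linalg.norm(v) for v in (r1, r2, r3))
    num = np.dot(r1, np.cross(r2, r3))
    den = (n1 * n2 * n3 + np.dot(r1, r2) * n3
           + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1)
    return 2.0 * np.arctan2(num, den)

# Three orthogonal unit vectors span one octant: 4*pi/8 = pi/2 steradians.
omega = triangle_solid_angle([1, 0, 0], [0, 1, 0], [0, 0, 1])
print(round(omega / np.pi, 3))
```

A feature-selection scheme could rank attribute triples by how much solid angle their vectors span, preferring geometrically spread-out (less redundant) attributes.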
Proceedings of SPIE | 2012
Mausumi Acharyya; Sumit Chakravarty; Rajiv Raman
In this paper we propose to develop a computer-assisted reading (CAR) tool for ocular disease. This involves identification and quantitative description of patterns in the retinal vasculature. The features taken into account are fractal dimension and vessel branching. Subsequently, a measure combining these features is designed that helps quantify the progression of the disease. The aim of the research is to develop algorithms for parameterization of eye fundus images, thus improving diagnostics.
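The fractal dimension feature can be illustrated with a standard box-counting estimate on a binary vessel mask: count occupied boxes at several scales and fit the slope of log(count) against log(1/size). The scales and the synthetic test shapes are illustrative, not the paper's fundus data.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of a binary mask: the slope of
    log(occupied box count) vs log(1/box size)."""
    counts = []
    for s in sizes:
        h, w = mask.shape[0] // s * s, mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square has dimension 2, a straight line 1;
# real vessel trees fall in between.
square = np.ones((128, 128), dtype=bool)
line = np.zeros((128, 128), dtype=bool)
line[64, :] = True
d_square = box_counting_dimension(square)
d_line = box_counting_dimension(line)
print(round(d_square, 2), round(d_line, 2))
```

Applied to a segmented vasculature mask, this slope becomes one of the scalar features that the combined progression measure would aggregate.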