Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Debi Prosad Dogra is active.

Publication


Featured research published by Debi Prosad Dogra.


Pattern Recognition Letters | 2017

Coupled HMM-based multi-sensor data fusion for sign language recognition

Pradeep Kumar; Himaanshu Gauba; Partha Pratim Roy; Debi Prosad Dogra

A novel multi-sensor framework is proposed for Sign Language Recognition (SLR). The framework recognizes dynamic sign words performed by hearing-impaired persons. A recognition combination framework using a Coupled HMM (CHMM) is proposed for SLR. Results show higher accuracy with the CHMM than with uni-modal and other fusion approaches. Recent development of low-cost depth sensors such as the Leap Motion controller and the Microsoft Kinect sensor has opened up new opportunities for Human-Computer Interaction (HCI). In this paper, we propose a novel multi-sensor fusion framework for Sign Language Recognition (SLR) using a Coupled Hidden Markov Model (CHMM). The CHMM models interaction in the state space rather than in the observation space used by the classical HMM, which fails to capture inter-modal dependencies. The framework has been used to recognize dynamic isolated sign gestures performed by hearing-impaired persons. The dataset has also been tested using existing data fusion approaches. The best recognition accuracy, as high as 90.80%, has been achieved with the CHMM. Our CHMM-based approach shows improvement in recognition performance over popular existing data fusion techniques.
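
The coupled-HMM formulation is specific to the paper; as a point of reference, the simpler uni-modal/late-fusion baseline it compares against can be sketched as below, with one Gaussian HMM per sign class and per sensor modality and decisions fused by summing per-modality log-likelihoods. Feature shapes, the data layout, and the hmmlearn dependency are assumptions, not details taken from the paper.

```python
# Hypothetical late-fusion baseline (not the paper's CHMM): one HMM per
# sign class and per sensor modality; per-modality log-likelihoods are summed.
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed external dependency

def train_models(train_data, n_states=5):
    """train_data: {sign: {modality: list of (T_i, D) feature arrays}}."""
    models = {}
    for sign, modalities in train_data.items():
        models[sign] = {}
        for modality, sequences in modalities.items():
            X = np.vstack(sequences)
            lengths = [len(seq) for seq in sequences]
            hmm = GaussianHMM(n_components=n_states, covariance_type="diag")
            hmm.fit(X, lengths)
            models[sign][modality] = hmm
    return models

def classify(models, sample):
    """sample: {modality: (T, D) array}; pick the sign with the best fused score."""
    scores = {
        sign: sum(per_mod[m].score(sample[m]) for m in sample)
        for sign, per_mod in models.items()
    }
    return max(scores, key=scores.get)
```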


IEEE Transactions on Biomedical Engineering | 2016

Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning

K. M. Vamsikrishna; Debi Prosad Dogra; Maunendra Sankar Desarkar

Physical rehabilitation supported by computer-assisted interfaces is gaining popularity among the health-care fraternity. In this paper, we propose a computer-vision-assisted, contactless methodology to facilitate palm and finger rehabilitation. A Leap Motion controller has been interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have developed an interface using the Unity3D development platform. Our interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMM) has been used to classify gesture sequences performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly when applied to isolated gesture recognition. We have compared the results of HMM-based sequence classification with CRF-based techniques. Our results confirm that both HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.
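
A minimal sketch of the isolated-gesture comparison described above is given below: linear discriminant analysis versus an RBF-kernel SVM evaluated with cross-validation on fixed-length feature vectors. The feature dimensions, labels, and synthetic data are placeholders, not the paper's actual descriptors.

```python
# Sketch: compare LDA and SVM on fixed-length gesture descriptors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 24))    # placeholder Leap Motion gesture descriptors
y = rng.integers(0, 8, size=300)  # placeholder labels for 8 isolated gestures

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```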


Neurocomputing | 2017

A multimodal framework for sensor based sign language recognition

Pradeep Kumar; Himaanshu Gauba; Partha Pratim Roy; Debi Prosad Dogra

In this paper, we propose a novel multimodal framework for isolated Sign Language Recognition (SLR) using sensor devices. Microsoft Kinect and Leap Motion sensors are used in our framework to capture finger and palm positions from two different views during gestures. One sensor (Leap Motion) is kept below the hand(s), while the other (Kinect) is placed in front of the signer to capture horizontal and vertical finger movements during sign gestures. A set of features is then extracted from the raw data captured by both sensors. Recognition is performed separately by Hidden Markov Model (HMM) and Bidirectional Long Short-Term Memory Neural Network (BLSTM-NN) based sequential classifiers. In the next phase, the results are combined to boost recognition performance. The framework has been tested on a dataset of 7500 Indian Sign Language (ISL) gestures comprising 50 different sign-words. Our dataset includes single-handed as well as double-handed gestures. It has been observed that accuracy improves when data from both sensors are fused, compared with single-sensor recognition. We have recorded improvements of 2.26% (single hand) and 0.91% (both hands) using HMM, and 2.88% (single hand) and 1.67% (both hands) using BLSTM-NN classifiers. Overall accuracies of 97.85% and 94.55% have been recorded by combining HMM and BLSTM-NN for single-handed and double-handed signs, respectively.
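
The paper combines the outputs of the two sequential classifiers; the exact combination rule is not reproduced here. The following is a hedged sketch of one plausible score-level fusion: normalize each classifier's 50-class scores and take a weighted average. The weight and the softmax normalization are assumptions for illustration only.

```python
# Hypothetical score-level fusion of HMM log-likelihoods and BLSTM-NN logits
# over the 50 sign-word classes (not necessarily the paper's rule).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def combine(hmm_loglik, blstm_logits, weight=0.5):
    """Both inputs are length-50 score vectors for one test gesture."""
    p_hmm = softmax(np.asarray(hmm_loglik, dtype=float))
    p_blstm = softmax(np.asarray(blstm_logits, dtype=float))
    fused = weight * p_hmm + (1.0 - weight) * p_blstm
    return int(np.argmax(fused))  # index of the predicted sign-word
```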


IEEE Sensors Journal | 2017

Study of Text Segmentation and Recognition Using Leap Motion Sensor

Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Debi Prosad Dogra

Recognition of 3-D text drawn with fingers using a Leap Motion sensor can be challenging for existing text recognition frameworks. The text sensed by the Leap Motion device differs from traditional offline and online writing because of frequent jitters and non-uniform character sizes when writing through the Leap Motion interface. Moreover, because of air writing, characters, words, and lines are usually connected by a continuous stroke, which makes recognition difficult. In this paper, we present a study of segmentation and recognition of text recorded using the Leap Motion sensor. Segmentation of continuous text into words is performed using a heuristic analysis of the stroke length between two successive words. Next, recognition of each segmented word is performed using sequential classifiers: we perform 3-D text recognition using a hidden Markov model (HMM) and bidirectional long short-term memory neural networks (BLSTM-NNs). For the experiments, we have created a dataset consisting of 560 Latin sentences drawn by ten participants using the Leap Motion sensor. An accuracy of 78.2% has been obtained in word segmentation, whereas accuracies of 86.88% and 81.25% have been recorded in word recognition using the BLSTM-NN and HMM classifiers, respectively.
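
The paper's stroke-length heuristic and threshold are not detailed above; a minimal sketch of a gap-based variant is shown below, splitting the continuous 3-D fingertip trajectory wherever the jump between successive samples is unusually large relative to the median movement. The threshold factor and the synthetic trajectory are assumptions.

```python
# Hypothetical gap-based word segmentation for air-written text.
import numpy as np

def segment_words(points, factor=5.0):
    """points: (N, 3) fingertip positions; returns a list of word segments."""
    steps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    threshold = factor * np.median(steps)
    breaks = np.where(steps > threshold)[0] + 1
    return np.split(points, breaks)

# Example: a synthetic trajectory with one large inter-word jump.
rng = np.random.default_rng(0)
traj = np.vstack([np.cumsum(rng.random((50, 3)) * 0.01, axis=0),
                  np.cumsum(rng.random((50, 3)) * 0.01, axis=0) + [1.0, 0.0, 0.0]])
print(len(segment_words(traj)))  # expected: 2 segments
```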


Neural Networks | 2017

Prediction of advertisement preference by fusing EEG response and sentiment analysis

Himaanshu Gauba; Pradeep Kumar; Partha Pratim Roy; Priyanka Singh; Debi Prosad Dogra; Balasubramanian Raman

This paper presents a novel approach to predicting the rating of video advertisements based on a multimodal framework combining physiological analysis of the user and the global sentiment rating available on the internet. We fuse the Electroencephalogram (EEG) signals of the user and the corresponding global textual comments on the video to understand the user's preference more precisely. In our framework, users were asked to watch a video advertisement while EEG signals were recorded simultaneously. Valence scores were obtained through self-report for each video; a higher valence corresponds to greater intrinsic attractiveness for the user. Furthermore, the multimedia data comprising the comments posted by global viewers were retrieved and processed using Natural Language Processing (NLP) techniques for sentiment analysis. Textual content from the review comments was analyzed to obtain a score reflecting the sentiment of the video. A regression technique based on Random Forest was used to predict the rating of an advertisement from EEG data. Finally, the EEG-based rating was combined with the NLP-based sentiment score to improve the overall prediction. The study was carried out using 15 video clips of advertisements available online, with twenty-five participants involved in evaluating the proposed system. The results are encouraging and suggest that the proposed multimodal approach achieves lower RMSE in rating prediction than prediction using EEG data alone.
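
A hedged sketch of the two-stage prediction is given below: a Random Forest regressor maps EEG features to a rating, and the EEG-based prediction is then blended with an NLP sentiment score. The synthetic data, feature dimensions, rating scale, and blending weight are assumptions, not the paper's exact procedure.

```python
# Sketch: EEG-based rating regression followed by fusion with a sentiment score.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 32))      # placeholder EEG feature vectors
y_train = rng.uniform(1, 9, size=100)     # placeholder self-reported ratings
X_test = rng.normal(size=(20, 32))
y_test = rng.uniform(1, 9, size=20)
sentiment_test = rng.uniform(1, 9, size=20)  # placeholder comment-sentiment scores

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
eeg_pred = rf.predict(X_test)

alpha = 0.7  # assumed blending weight
fused_pred = alpha * eeg_pred + (1 - alpha) * sentiment_test
print("RMSE (EEG only):", np.sqrt(mean_squared_error(y_test, eeg_pred)))
print("RMSE (fused):   ", np.sqrt(mean_squared_error(y_test, fused_pred)))
```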


Multimedia Tools and Applications | 2017

Analysis of EEG signals and its application to neuromarketing

Mahendra Yadava; Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Debi Prosad Dogra

Marketing and promotion of consumer products through advertisement campaigns is a well-known practice for increasing sales and awareness among consumers, which essentially increases the profit of a manufacturing unit. Re-production of products usually depends on various factors, including consumption in the market, reviewers' comments, ratings, etc. Inferring consumer preference for decision making and predicting behavior for effective utilization of a product through unconscious processes is called "Neuromarketing". This field is emerging fast due to its inherent potential; research in this direction is therefore in high demand, yet it has not reached a satisfactory level. In this paper, we propose a predictive modeling framework to understand consumer choice towards e-commerce products in terms of "likes" and "dislikes" by analyzing EEG signals. The EEG signals of volunteers of varying age and gender were recorded while they browsed various consumer products, and the experiments were performed on the resulting dataset. The accuracy of choice prediction was recorded using a user-independent testing approach with a Hidden Markov Model (HMM) classifier. We have observed that the prediction results are promising and that the framework can be used to build better business models.
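
The user-independent testing approach mentioned above is typically realized as a leave-one-subject-out split, so the test user never appears in training. The sketch below illustrates that protocol only; a linear SVM stands in for the paper's HMM classifier, and the data, feature dimensions, and subject grouping are placeholders.

```python
# Sketch of leave-one-subject-out (user-independent) evaluation.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))           # placeholder EEG feature vectors
y = rng.integers(0, 2, size=200)         # placeholder like(1)/dislike(0) labels
subjects = np.repeat(np.arange(10), 20)  # placeholder subject ids (10 users)

scores = cross_val_score(SVC(kernel="linear"), X, y,
                         groups=subjects, cv=LeaveOneGroupOut())
print("per-subject accuracies:", np.round(scores, 2))
```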


Journal of Network and Computer Applications | 2017

A bio-signal based framework to secure mobile devices

Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Debi Prosad Dogra

Nowadays, mobile devices are often equipped with high-end processing units and large storage space, and mobile users usually store personal, official, and large amounts of multimedia data on them. Security of such devices mainly depends on a PIN (personal identification number), password, biometric data, or gestures/patterns. However, these mechanisms have many security vulnerabilities and are prone to various types of attacks, such as shoulder surfing. The uniqueness of Electroencephalography (EEG) signals can be exploited to remove some of the drawbacks of existing systems; such signals can be recorded and transmitted over a wireless medium for processing. In this paper, we propose a new framework to secure mobile devices using EEG signals along with existing pattern-based authentication. The pattern-based authentication passwords are considered identification tokens. We have investigated the use of EEG signals recorded during pattern drawing on the screen of the mobile device in the authentication phase. To accomplish this, we collected EEG signals of 50 users while they drew different patterns. The robustness of the system has been evaluated against 2400 unauthorized attempts made by 30 unauthorized users who tried to gain access to the device using known patterns of 20 genuine users. EEG signals are modeled using a Hidden Markov Model (HMM), and a binary classifier implemented with a Support Vector Machine (SVM) is used to verify the authenticity of a test pattern. Verification performance is measured using three popular security metrics, namely Detection Error Trade-off (DET), Half Total Error Rate (HTER), and Receiver Operating Characteristic (ROC) curves. Our experiments reveal that the method is promising and can be a possible alternative for developing robust authentication protocols for hand-held devices.
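
For the verification metrics named above, a minimal sketch is shown below: given genuine and impostor scores from a verifier, it computes the ROC curve and the Half Total Error Rate, HTER = (FAR + FRR) / 2, at a chosen operating threshold. The scores, their distributions, and the threshold are synthetic placeholders, not outputs of the paper's HMM/SVM verifier.

```python
# Sketch: ROC, AUC, and HTER from genuine vs. impostor verification scores.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.5, size=200)     # placeholder genuine-attempt scores
impostor = rng.normal(-1.0, 0.5, size=2400)  # placeholder impostor-attempt scores

scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
fpr, tpr, thresholds = roc_curve(labels, scores)
print("AUC:", auc(fpr, tpr))

threshold = 0.0                        # assumed operating point
far = np.mean(impostor >= threshold)   # false acceptance rate
frr = np.mean(genuine < threshold)     # false rejection rate
print("HTER:", (far + frr) / 2)
```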


Multimedia Tools and Applications | 2017

3D text segmentation and recognition using leap motion

Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Debi Prosad Dogra

In this paper, we present a method of Human-Computer Interaction (HCI) through 3D air-writing. Our proposed method provides a natural way of interaction without pen and paper. Online text is drawn in the air by 3D fingertip gestures within the field of view of a Leap Motion sensor. The text consists of a single stroke only; hence, gaps between adjacent words are usually absent. This makes the system different from conventional 2D writing with pen and paper. We have collected a dataset comprising 320 Latin sentences and used a heuristic to segment 3D words from sentences. Subsequently, we present a methodology to segment continuous 3D strokes into lines of text by finding large gaps between the end and start of lines, followed by segmentation of the text lines into words. In the next phase, a Hidden Markov Model (HMM) based classifier is used to recognize 3D sequences of segmented words. We have used dynamic as well as simple features for classification. We have recorded an overall accuracy of 80.3% in word segmentation, and recognition accuracies of 92.73% and 90.24% when tested with dynamic and simple features, respectively. The results show that the Leap Motion device can be a low-cost yet useful solution for inputting text naturally compared with conventional systems. In the future, this may be extended so that the system can successfully work on cluttered gestures.
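
The "dynamic" features mentioned above are not specified in detail; the sketch below shows one illustrative choice, deriving per-sample speed and writing-direction angles from consecutive fingertip positions of a segmented word. This is an assumed feature set for illustration, not the paper's exact one.

```python
# Hypothetical dynamic-feature extraction for a segmented 3-D word stroke.
import numpy as np

def dynamic_features(points):
    """points: (N, 3) fingertip positions -> (N-1, 3) feature matrix."""
    d = np.diff(points, axis=0)
    speed = np.linalg.norm(d, axis=1)
    azimuth = np.arctan2(d[:, 1], d[:, 0])                        # in-plane direction
    elevation = np.arctan2(d[:, 2], np.hypot(d[:, 0], d[:, 1]))   # out-of-plane angle
    return np.column_stack([speed, azimuth, elevation])

rng = np.random.default_rng(0)
word = np.cumsum(rng.random((80, 3)) * 0.01, axis=0)  # placeholder word stroke
print(dynamic_features(word).shape)  # (79, 3)
```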


Journal of Visual Communication and Image Representation | 2012

Evaluation of segmentation techniques using region area and boundary matching information

Debi Prosad Dogra; Arun K. Majumdar; Shamik Sural

Evaluation techniques play an important role while picking a suitable segmentation scheme out of a number of alternatives. In this paper, a novel supervised segmentation evaluation scheme is proposed that is designed by combining segment area and boundary information. Using the evaluation metric, a ranking of the popular segmentation algorithms is carried out. A comparative analysis with existing supervised metrics that are commonly used for grading segmentation schemes is performed. Experimental results indicate that the performance of the proposed measure is promising.
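
The paper's exact metric is not reproduced above; the sketch below is only an illustration of the general idea of combining a region-area term with a boundary-matching term for a binary segmentation mask (here, a weighted mix of the Jaccard index and a tolerance-based boundary F-measure). The weighting, tolerance, and formulation are assumptions.

```python
# Illustrative combined area/boundary score for binary segmentation masks.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def combined_score(pred, gt, tol=2, alpha=0.5):
    """pred, gt: boolean 2-D masks; tol: boundary tolerance in pixels."""
    # Area term: Jaccard index of the two regions.
    area = np.logical_and(pred, gt).sum() / max(np.logical_or(pred, gt).sum(), 1)
    # Boundary term: F-measure of boundary pixels matched within `tol` pixels.
    def boundary(mask):
        return mask & ~binary_erosion(mask)
    bp, bg = boundary(pred), boundary(gt)
    near_gt = binary_dilation(bg, iterations=tol)
    near_pred = binary_dilation(bp, iterations=tol)
    precision = (bp & near_gt).sum() / max(bp.sum(), 1)
    recall = (bg & near_pred).sum() / max(bg.sum(), 1)
    boundary_f = 2 * precision * recall / max(precision + recall, 1e-9)
    return alpha * area + (1 - alpha) * boundary_f

gt = np.zeros((64, 64), bool); gt[16:48, 16:48] = True
pred = np.zeros((64, 64), bool); pred[18:50, 14:46] = True
print(round(combined_score(pred, gt), 3))
```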


International Conference on Computer Vision Theory and Applications | 2015

Scene Representation and Anomalous Activity Detection using Weighted Region Association Graph

Debi Prosad Dogra; Rohit Desam Reddy; K. S. Subramanyam; Arif Ahmed; Harish Bhaskar

In this paper, we present a novel method for anomalous activity detection using systematic trajectory analysis. First, the visual scene is segmented into constituent regions by attaching importance to each region based on the motion dynamics of targets in the scene. Further, a structured representation of these segmented regions in the form of a region association graph (RAG) is constructed. Finally, anomalous activity is detected by benchmarking the target's trajectory against the RAG. We have evaluated our proposed algorithm and compared it against competent baselines using videos from publicly available as well as in-house datasets. Our results indicate high accuracy in localizing anomalous segments and demonstrate that the proposed algorithm has several compelling advantages when applied to scene analysis in autonomous visual surveillance.
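
As a rough illustration of benchmarking a trajectory against a region association graph (not the paper's exact formulation), the sketch below builds a weighted directed graph from region-transition counts of normal trajectories and flags a test trajectory whose transitions are missing or rare. Mapping trajectory points to region labels is assumed to happen upstream.

```python
# Hypothetical region-association-graph check for anomalous trajectories.
import networkx as nx

def build_rag(normal_trajectories):
    """normal_trajectories: lists of region ids visited by normal targets."""
    g = nx.DiGraph()
    for traj in normal_trajectories:
        for a, b in zip(traj, traj[1:]):
            if a != b:
                w = g.get_edge_data(a, b, {"weight": 0})["weight"]
                g.add_edge(a, b, weight=w + 1)
    return g

def is_anomalous(g, traj, min_weight=2):
    """Flag the trajectory if any region transition is unseen or rare."""
    return any(a != b and g.get_edge_data(a, b, {"weight": 0})["weight"] < min_weight
               for a, b in zip(traj, traj[1:]))

normal = [["A", "B", "C"], ["A", "B", "C"], ["A", "B", "D"]]
rag = build_rag(normal)
print(is_anomalous(rag, ["A", "B", "C"]))  # False: frequently observed path
print(is_anomalous(rag, ["A", "D"]))       # True: unseen transition
```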

Collaboration


Dive into Debi Prosad Dogra's collaborations.

Top Co-Authors

Partha Pratim Roy (Indian Institute of Technology Roorkee)
Rajkumar Saini (Indian Institute of Technology Roorkee)
Pradeep Kumar (Indian Institute of Technology Roorkee)
Arun K. Majumdar (Indian Institute of Technology Kharagpur)
Shamik Sural (Indian Institute of Technology Kharagpur)
Samarjit Kar (National Institute of Technology)
Santosh Kumar Behera (Indian Institute of Technology Bhubaneswar)
Arun Kumar Singh (Memorial Hospital of South Bend)
Suchandra Mukherjee (Memorial Hospital of South Bend)