Publication


Featured research published by Rajkumar Saini.


IEEE Sensors Journal | 2017

Study of Text Segmentation and Recognition Using Leap Motion Sensor

Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Debi Prosad Dogra

Recognition of 3-D text drawn with the fingers using a Leap motion sensor can be challenging for existing text recognition frameworks. Text sensed by the Leap motion device differs from traditional offline and online writing because of frequent jitters and non-uniform character sizes when writing through the Leap motion interface. Moreover, because of air writing, characters, words, and lines are usually connected by a continuous stroke, which makes recognition difficult. In this paper, we present a study of segmentation and recognition of text recorded using a Leap motion sensor. Continuous text is segmented into words using a heuristic analysis of the stroke length between two successive words. Next, each segmented word is recognized using sequential classifiers. We perform 3-D text recognition using hidden Markov models (HMM) and bidirectional long short-term memory neural networks (BLSTM-NNs). For the experiments, we created a dataset of 560 Latin sentences drawn by ten participants using the Leap motion sensor. An accuracy of 78.2% has been obtained in word segmentation, whereas accuracies of 86.88% and 81.25% have been recorded in word recognition using the BLSTM-NN and HMM classifiers, respectively.
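As a toy illustration of the kind of stroke-length heuristic the abstract describes, the sketch below splits a continuous trace of 3-D points at unusually long jumps between successive points. The function name, the median-based rule, and the `gap_factor` threshold are assumptions for illustration, not the authors' implementation.

```python
import math

def segment_words(points, gap_factor=3.0):
    """Split a list of (x, y, z) points into words at large jumps.

    A jump is treated as a word gap when it exceeds gap_factor times
    the median distance between successive points.
    """
    if len(points) < 2:
        return [points] if points else []
    dists = [math.dist(a, b) for a, b in zip(points, points[1:])]
    med = sorted(dists)[len(dists) // 2]
    words, current = [], [points[0]]
    for pt, d in zip(points[1:], dists):
        if d > gap_factor * med:  # unusually long stroke => word boundary
            words.append(current)
            current = []
        current.append(pt)
    words.append(current)
    return words
```

With a trace like `[(0,0,0), (1,0,0), (2,0,0), (10,0,0), (11,0,0)]`, the 8-unit jump is treated as a word gap and two words are returned.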


Multimedia Tools and Applications | 2017

Analysis of EEG signals and its application to neuromarketing

Mahendra Yadava; Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Debi Prosad Dogra

Marketing and promotion of consumer products through advertising campaigns is a well-known practice for increasing sales and consumer awareness, which in turn increases a manufacturer's profit. Decisions about re-production of a product usually depend on various factors, including market consumption, reviewers' comments, ratings, and so on. Inferring consumer preference for decision making and predicting behavior from unconscious processes, however, is called "neuromarketing". This field is emerging fast due to its inherent potential, so research in this direction is in high demand but has not yet reached a satisfactory level. In this paper, we propose a predictive modeling framework to understand consumer choices about e-commerce products in terms of "likes" and "dislikes" by analyzing EEG signals. The EEG signals of volunteers of varying age and gender were recorded while they browsed various consumer products. Experiments were performed on this dataset, and the accuracy of choice prediction was measured using a user-independent testing approach with a Hidden Markov Model (HMM) classifier. We observed that the prediction results are promising and that the framework can be used to build better business models.
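EEG-based prediction pipelines like the one described typically start by turning raw signals into features before classification. Below is a minimal sketch of one standard step, band-power extraction with an FFT; the band names and edges follow common EEG convention, and the function itself is an illustrative assumption rather than the paper's actual pipeline.

```python
import numpy as np

# Conventional EEG frequency bands in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Return the mean spectral power per EEG band for a 1-D signal
    sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}
```

For a pure 10 Hz test signal, the alpha band (8-13 Hz) dominates the other bands, as expected.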


Journal of Network and Computer Applications | 2017

A bio-signal based framework to secure mobile devices

Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Debi Prosad Dogra

Nowadays, mobile devices are often equipped with high-end processing units and large storage space, and users store personal, official, and large amounts of multimedia data on them. The security of such devices mainly depends on a PIN (personal identification number), password, biometric data, or gestures/patterns. However, these mechanisms have many security vulnerabilities and are prone to attacks such as shoulder surfing. The uniqueness of electroencephalography (EEG) signals can be exploited to remove some of the drawbacks of existing systems; such signals can be recorded and transmitted over a wireless medium for processing. In this paper, we propose a new framework that secures mobile devices using EEG signals alongside existing pattern-based authentication, where pattern-based passwords serve as identification tokens. We investigate the use of EEG signals recorded while patterns are drawn on the screen of the mobile device during the authentication phase. To accomplish this, we collected EEG signals from 50 users while they drew different patterns. The robustness of the system has been evaluated against 2400 unauthorized attempts made by 30 unauthorized users who tried to gain access to the device using the known patterns of 20 genuine users. EEG signals are modeled using a Hidden Markov Model (HMM), and a binary classifier implemented with a Support Vector Machine (SVM) verifies the authenticity of a test pattern. Verification performance is measured using three popular security metrics, namely Detection Error Trade-off (DET), Half Total Error Rate (HTER), and Receiver Operating Characteristic (ROC) curves. Our experiments reveal that the method is promising and can be a viable alternative for developing robust authentication protocols for hand-held devices.
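Of the three metrics named above, Half Total Error Rate is simple to state in code: it averages the false acceptance rate and the false rejection rate at a chosen decision threshold. The sketch below is a generic HTER computation with illustrative names; it is not tied to the paper's HMM/SVM pipeline.

```python
def hter(genuine_scores, impostor_scores, threshold):
    """Accept when score >= threshold; return (FAR, FRR, HTER).

    FAR: fraction of impostor attempts wrongly accepted.
    FRR: fraction of genuine attempts wrongly rejected.
    HTER: the average of the two.
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr, (far + frr) / 2.0
```

Sweeping the threshold and plotting FAR against FRR is exactly what produces the DET and ROC curves also mentioned in the abstract.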


Multimedia Tools and Applications | 2017

3D text segmentation and recognition using leap motion

Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Debi Prosad Dogra

In this paper, we present a method of Human-Computer Interaction (HCI) through 3D air-writing, a natural way of interacting without pen and paper. Online text is drawn in air by 3D fingertip gestures within the field of view of a Leap motion sensor. The text consists of a single stroke only, so gaps between adjacent words are usually absent; this makes the system different from conventional 2D writing with pen and paper. We collected a dataset comprising 320 Latin sentences and used a heuristic to segment 3D words from sentences. Subsequently, we present a methodology to segment continuous 3D strokes into lines of text by finding the large gaps between the end and start of lines, followed by segmentation of the text lines into words. In the next phase, a Hidden Markov Model (HMM) based classifier is used to recognize the 3D sequences of segmented words, using dynamic as well as simple features for classification. We recorded an overall accuracy of 80.3% in word segmentation, and recognition accuracies of 92.73% and 90.24% when testing with dynamic and simple features, respectively. The results show that the Leap motion device can be a low-cost yet useful solution for inputting text naturally compared with conventional systems. In the future, this may be extended so that the system can successfully work on cluttered gestures.


Asian Conference on Pattern Recognition | 2015

Segmentation and recognition of text written in 3D using Leap motion interface

Chelsi Agarwal; Debi Prosad Dogra; Rajkumar Saini; Partha Pratim Roy

In this paper, we present a word extraction and recognition methodology for online cursive handwritten text lines recorded by a Leap motion controller. Online text drawn in air by 3D gestures is distinct from the usual online pen-based strokes: because the 3D gestures are recorded in air, they often produce non-uniform text style and jitter while writing. Also, due to the constraint of writing in air, the pause in stroke flow between words is missing; instead, all words and lines are connected by a continuous stroke. We use a simple but effective heuristic to segment words written in air, and propose a segmentation methodology that divides continuous 3D strokes into text lines and words. Text lines are separated by heuristically finding the large gaps between the end and start positions of successive text lines, and word segmentation is formulated as a two-class problem. In the next phase, we use a Hidden Markov Model (HMM) based approach to recognize the segmented words. Experimental validation on a large dataset of 320 sentences reveals that the proposed heuristic-based word segmentation algorithm reaches an accuracy as high as 80.3%, and an accuracy of 77.6% has been recorded for HMM-based word recognition when the segmented words are fed to the HMM. The results show that the framework is efficient even with cluttered gestures.


International Conference on Machine Vision | 2017

Real-time recognition of sign language gestures and air-writing using leap motion

Pradeep Kumar; Rajkumar Saini; Santosh Kumar Behera; Debi Prosad Dogra; Partha Pratim Roy

A sign language is generally composed of three main parts: manual signs, which are gestures made by hand or finger movements; non-manual signs such as facial expressions or body postures; and finger-spelling, where signers spell out words using gestures to convey meaning. In the literature, researchers have proposed various Sign Language Recognition (SLR) systems that focus on only one part of the sign language; combinations of the different parts have not been explored much. In this paper, we present a framework to recognize manual signs and finger-spelling using a Leap motion sensor. In the first phase, a Support Vector Machine (SVM) classifier is used to differentiate between manual and finger-spelling gestures. Next, two BLSTM-NN classifiers are used for the recognition of manual signs and finger-spelling gestures, using sequence-classification and sequence-transcription based approaches, respectively. A dataset of 2240 sign gestures, consisting of 28 isolated manual signs and 28 finger-spelled words, has been recorded from 10 users. We obtained an overall accuracy of 63.57% in real-time recognition of sign gestures.


CVIP (1) | 2017

Surveillance Scene Segmentation Based on Trajectory Classification Using Supervised Learning

Rajkumar Saini; Arif Ahmed; Debi Prosad Dogra; Partha Pratim Roy

Scene understanding plays a vital role in visual surveillance and security, where we aim to classify surveillance scenes based on two important cues, namely the scene's layout and the activities or motions within the scene. In this paper, we propose a novel supervised learning-based algorithm to segment surveillance scenes with the help of high-level features extracted from object trajectories. The high-level features are computed using a recently proposed nonoverlapping block-based representation of the surveillance scene. We train a Hidden Markov Model (HMM) to learn parameters describing the dynamics of a given surveillance scene. Experiments have been carried out using publicly available datasets, and the outcomes suggest that the proposed methodology can deliver encouraging results for correctly segmenting surveillance scenes with the help of motion trajectories. We have compared the method with state-of-the-art techniques and observed that it outperforms baseline algorithms in various contexts, such as localizing frequently accessed paths and marking abandoned or inaccessible locations.
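The nonoverlapping block-based representation mentioned above can be pictured as mapping raw trajectory points to a sequence of grid-cell indices, over which block-level statistics are then computed. The sketch below is a hypothetical version of that mapping; the grid size and function name are illustrative assumptions, not the paper's exact representation.

```python
def trajectory_to_blocks(points, width, height, nx, ny):
    """Map (x, y) points onto an nx-by-ny grid of nonoverlapping blocks,
    returning the sequence of visited block ids with consecutive
    duplicates dropped (so the sequence reflects block transitions)."""
    seq = []
    for x, y in points:
        bx = min(int(x / width * nx), nx - 1)   # column index, clamped
        by = min(int(y / height * ny), ny - 1)  # row index, clamped
        bid = by * nx + bx                      # flatten to a single id
        if not seq or seq[-1] != bid:
            seq.append(bid)
    return seq
```

Each trajectory thus becomes a short discrete sequence, a natural input for an HMM trained over the scene's dynamics.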


Ubiquitous Computing | 2018

Envisioned speech recognition using EEG sensors

Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Pawan Kumar Sahu; Debi Prosad Dogra

Recent advances in EEG technology make the brain-computer interface (BCI) an exciting field of research. BCI is primarily used to assist people with paralyzed body parts; however, BCI for envisioned speech recognition using electroencephalogram (EEG) signals has not been studied in detail. In this paper, we therefore propose a coarse-to-fine envisioned speech recognition framework based on EEG signals, which can be considered a serious contribution to this field of research. Coarse-level classification categorizes samples into text and non-text classes using a random forest (RF) classifier, and finer-level imagined speech recognition is then carried out within each class. EEG data for 30 text and non-text classes, including characters, digits, and object images, were imagined by 23 participants in this study. Recognition accuracies of 85.20% and 67.03% have been recorded at the coarse- and fine-level classifications, respectively. The proposed framework outperforms existing work in terms of accuracy, and we also show its robustness in envisioned speech recognition.
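The coarse-to-fine idea itself is classifier-agnostic: a coarse classifier routes a sample to a category, and a category-specific fine classifier produces the final label. The dispatcher below is a minimal sketch with stand-in callables; per the abstract, the real coarse stage used a random forest, and the category names here are only illustrative.

```python
def coarse_to_fine(sample, coarse_clf, fine_clfs):
    """Two-stage classification: `coarse_clf` picks a category, then the
    matching classifier in the `fine_clfs` mapping labels the sample.
    Returns (category, fine_label)."""
    category = coarse_clf(sample)
    return category, fine_clfs[category](sample)
```

The benefit of the split is that each fine classifier only has to separate classes within its own category, a much easier problem than one flat 30-way decision.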


CVIP (1) | 2017

Classification of Object Trajectories Represented by High-Level Features Using Unsupervised Learning

Rajkumar Saini; Arif Ahmed; Debi Prosad Dogra; Partha Pratim Roy

Object motion trajectory classification is an important task, often used to detect abnormal movement patterns so that appropriate action can be taken to prevent unwanted events. Given a set of trajectories recorded over a period of time, they can be clustered to understand the usual flow of movement or to detect unusual flows. Automatic traffic management, visual surveillance, behavioral understanding, and sports or scientific video analysis are typical applications that benefit from clustering object trajectories. In this paper, we propose an unsupervised way of clustering object trajectories to filter out movements that deviate largely from the usual patterns. A scene is divided into nonoverlapping rectangular blocks, and the importance of each block is estimated. Two statistical parameters that closely describe the dynamics of each block are computed. Next, these high-level features are used to cluster the set of trajectories using the k-means clustering technique. Experimental results on public datasets reveal that our proposed method can categorize object trajectories with higher accuracy than clustering on raw trajectory data or grouping with a more complex method such as spectral clustering.
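A compact Lloyd-style k-means over per-trajectory feature vectors (such as the two block statistics described above) can be sketched as follows. The initialization scheme and fixed iteration count are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Cluster the rows of X into k groups; returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    # initialize centers at k distinct random data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

On two well-separated groups of feature vectors, the loop converges in a couple of iterations and recovers the grouping regardless of which points seed the centers.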


Multimedia Tools and Applications | 2018

A position and rotation invariant framework for sign language recognition (SLR) using Kinect

Pradeep Kumar; Rajkumar Saini; Partha Pratim Roy; Debi Prosad Dogra

Sign language is the only means of communication for speech- and hearing-impaired people. Using machine translation, Sign Language Recognition (SLR) systems provide a medium of communication between the speech- and hearing-impaired and others who have difficulty understanding such languages. However, most SLR systems require the signer to sign in front of the capturing device or sensor, and fail to recognize some gestures when the relative position of the signer changes or when body occlusion occurs due to position variations. In this paper, we present a robust position- and rotation-invariant SLR framework. A depth sensor (Kinect) is used to obtain the signer's skeleton information. The framework is capable of recognizing occluded sign gestures and has been tested on a dataset of 2700 gestures. Recognition is performed using a Hidden Markov Model (HMM), and the results show the efficiency of the proposed framework, with an accuracy of 83.77% on occluded gestures.
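One way to picture position and rotation invariance for skeleton data is to translate the joints so a reference joint (e.g. the spine) sits at the origin, then rotate about the vertical axis so the shoulder line always faces the same way. The sketch below is a hypothetical normalization in that spirit; the joint names and conventions are assumptions, not the Kinect SDK's or the paper's exact method.

```python
import math

def normalize_skeleton(joints, spine, left_sh, right_sh):
    """Translate joints so `spine` is the origin, then rotate in the
    horizontal (x, z) plane so the shoulder line lies along the x-axis.
    All joints are (x, y, z) tuples."""
    sx, sy, sz = spine
    # yaw angle of the left-to-right shoulder line in the (x, z) plane
    ang = math.atan2(right_sh[2] - left_sh[2], right_sh[0] - left_sh[0])
    ca, sa = math.cos(ang), math.sin(ang)
    out = []
    for x, y, z in joints:
        x, y, z = x - sx, y - sy, z - sz               # position invariance
        out.append((x * ca + z * sa, y, -x * sa + z * ca))  # yaw invariance
    return out
```

After this step, the same gesture signed from a shifted or rotated standing position yields (ideally) the same joint coordinates, which is what lets a single HMM cover varying signer positions.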

Collaboration


Dive into Rajkumar Saini's collaborations.

Top Co-Authors

Partha Pratim Roy (Indian Institute of Technology Roorkee)
Debi Prosad Dogra (Indian Institute of Technology Bhubaneswar)
Pradeep Kumar (Indian Institute of Technology Roorkee)
Arif Ahmed (Haldia Institute of Technology)
Mahendra Yadava (Indian Institute of Technology Roorkee)
Pawan Kumar Sahu (Indian Institute of Technology Roorkee)
R. Balasubramanian (Indian Institute of Technology Roorkee)
Umapada Pal (Indian Statistical Institute)
Avirup Bhattacharyya (Indian Institute of Technology Roorkee)
Balasubramanian Raman (Indian Institute of Technology Roorkee)