Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Tanwi Mallick is active.

Publication


Featured research published by Tanwi Mallick.


IEEE Sensors Journal | 2014

Characterizations of Noise in Kinect Depth Images: A Review

Tanwi Mallick; Partha Pratim Das; Arun K. Majumdar

In this paper, we characterize the noise in Kinect depth images based on multiple factors and introduce a uniform nomenclature for the types of noise. In the process, we briefly survey the noise models of Kinect and relate these to the factors of characterization. We also deal with the noise in multi-Kinect set-ups and summarize the techniques for the minimization of interference noise. Studies on noise in Kinect depth images are distributed over several publications and there is no comprehensive treatise on it. This paper, to the best of our knowledge, is the maiden attempt to characterize the noise behavior of Kinect depth images in a structured manner. The characterization would help to selectively eliminate noise from depth images either by filtering or by adopting appropriate methodologies for image capture. In addition to the characterization based on the results reported by others, we also conduct independent experiments in a number of cases to fill up the gaps in characterization and to validate the reported results.
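As a concrete illustration of filtering guided by such a characterization, here is a minimal sketch (not from the paper; names are illustrative) of a hole-aware median filter that treats Kinect's zero-valued "unknown depth" pixels as holes rather than valid measurements:

```python
def denoise_depth(depth, k=1):
    """Median-filter a depth map, treating 0 as 'unknown depth' (a hole).

    Holes are excluded from each neighbourhood's median so that invalid
    readings do not drag valid depth values toward zero.
    """
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            # Collect valid (non-zero) neighbours in a (2k+1)x(2k+1) window.
            vals = [depth[j][i]
                    for j in range(max(0, y - k), min(h, y + k + 1))
                    for i in range(max(0, x - k), min(w, x + k + 1))
                    if depth[j][i] != 0]
            if vals:
                vals.sort()
                out[y][x] = vals[len(vals) // 2]
    return out
```

Selective elimination, as the survey suggests, would pick the window size and hole handling per noise type rather than applying one filter uniformly.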


Computer Vision and Pattern Recognition | 2013

Estimation of the orientation and distance of a mirror from Kinect depth data

Tanwi Mallick; Partha Pratim Das; Arun K. Majumdar

In many common applications of Microsoft Kinect™, including navigation, surveillance, and 3D reconstruction, it is necessary to estimate the geometry of mirrors or other reflecting surfaces in the field of view. This is often difficult because, in most positions, a mirror does not support diffuse reflection of the speckle pattern and hence cannot be seen in the Kinect depth map; it shows up as unknown depth. However, suitably placed objects reflected in the mirror can provide important clues to the orientation and distance of the mirror. In this paper we present a method that uses a ball and its mirror image to set up point-to-point correspondences between object and image points and solve for the geometry of the mirror. From these correspondences, simple estimators are designed for the orientation and distance of a vertical plane mirror with respect to the Kinect camera. In addition, an estimator is presented for the diameter of the ball. The estimators are validated through a set of experiments.
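The core geometric fact behind such estimators is that the mirror plane is the perpendicular bisector of the segment joining an object point and its image point. A minimal sketch (hypothetical function names, not the paper's code):

```python
import math

def mirror_plane(p, p_img):
    """Estimate a plane mirror's geometry from one object point p and its
    mirror-image point p_img (both in camera coordinates).

    The mirror plane is the perpendicular bisector of the segment p -> p_img:
    its unit normal is along (p - p_img) and it passes through the midpoint.
    Returns (normal, d) with the plane written as normal . x = d.
    """
    diff = [a - b for a, b in zip(p, p_img)]
    norm = math.sqrt(sum(c * c for c in diff))
    n = [c / norm for c in diff]
    mid = [(a + b) / 2 for a, b in zip(p, p_img)]
    d = sum(ni * mi for ni, mi in zip(n, mid))
    return n, d
```

With several ball positions, such single-pair estimates can be averaged (or fit by least squares) to reduce the effect of depth noise.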


Archive | 2018

Characterization, Detection, and Synchronization of Audio-Video Events in Bharatanatyam Adavus

Tanwi Mallick; Partha Pratim Das; Arun K. Majumdar

Bharatanatyam is the most popular form of Indian Classical Dance. Its Adavus are the basic choreographic units of a dance sequence. An Adavu is accompanied by percussion and vocal music and follows a specific rhythmic pattern (Sollukattu). In this paper, we first characterize the audio, video, and sync events of Adavus to succinctly represent them. Then, we present simple yet effective algorithms to detect audio and video events and measure their synchronization. The audio, video, and sync event detection achieve 94%, 84%, and 72% accuracy, respectively. A comparison of our audio event detection against a well-known method by Ellis shows significant improvement. We also create an annotated repository of Sollukattus and Adavus for research. There are several applications of the characterization and beat detection, including music/music video segmentation, synchronization of postures with beats, automatic tagging of rhythm metadata, etc. Neither the characterization of these events nor a repository of Bharatanatyam Adavus has been attempted before.
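A greatly simplified sketch of audio event (beat) detection in the spirit described above, picking local peaks of short-time energy (the paper's actual detector is not shown; names and thresholds here are illustrative):

```python
def detect_beats(energy, threshold, min_gap):
    """Detect beat events as local peaks of short-time audio energy.

    energy: per-frame short-time energy values.
    threshold: minimum energy for a frame to count as a beat.
    min_gap: minimum number of frames between consecutive beats.
    Returns the frame indices of detected beats.
    """
    beats = []
    for i in range(1, len(energy) - 1):
        is_peak = (energy[i] >= threshold
                   and energy[i] > energy[i - 1]
                   and energy[i] >= energy[i + 1])
        if is_peak and (not beats or i - beats[-1] >= min_gap):
            beats.append(i)
    return beats
```

Matching such audio beat frames against detected video key-posture frames is then a matter of comparing the two event timelines within a tolerance window.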


Computer Vision and Pattern Recognition | 2017

Automated Translation of Human Postures from Kinect Data to Labanotation

Anindhya Sankhla; Vinanti Kalangutkar; Himadri B. G. S. Bhuyan; Tanwi Mallick; Vivek Nautiyal; Partha Pratim Das; Arun K. Majumdar

We present a non-intrusive automated system to translate human postures into Labanotation, a graphical notation for human postures and movements. The system uses Kinect to capture the human postures, identifies the positions and formations of the four major limbs: two hands and two legs, converts to the vocabulary of Labanotation and finally translates to a parseable LabanXML representation. We use the skeleton stream to classify the formations of the limbs using multi-class support vector machines. Encoding to XML is performed based on Labanotation specification. A data set of postures is created and annotated for training the classifier and to test its performance. We achieve 80% to 90% accuracy for the 4 limbs. The system can be used as an effective front-end for posture analysis applications in various areas like dance and sports where predefined postures form the basis for analysis and interpretation. The parseability of XML makes it easy for integration in a platform independent manner.
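To illustrate the joint-vector-to-symbol idea: the paper classifies limb formations with multi-class SVMs, whereas this toy stand-in simply quantizes a limb direction vector into coarse Labanotation-style direction and level symbols, under an assumed camera-axis convention (+y up, -z toward the scene):

```python
import math

def laban_direction(dx, dy, dz):
    """Map a limb direction vector (proximal joint -> distal joint) to a
    coarse Labanotation-style (direction, level) pair.

    Vertical level (low / middle / high) comes from the elevation angle;
    the horizontal heading is quantized into 8 directions. Near-vertical
    limbs map to 'place'. A toy illustration, not the paper's classifier.
    """
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    pitch = math.asin(dy / norm)          # elevation above the horizontal plane
    if abs(pitch) > math.radians(75):     # almost straight up or down
        return ('place', 'high' if dy > 0 else 'low')
    if pitch > math.radians(30):
        level = 'high'
    elif pitch < math.radians(-30):
        level = 'low'
    else:
        level = 'middle'
    names = ['forward', 'right-forward', 'right', 'right-back',
             'back', 'left-back', 'left', 'left-forward']
    angle = math.atan2(dx, -dz) % (2 * math.pi)
    sector = int((angle + math.pi / 8) // (math.pi / 4)) % 8
    return (names[sector], level)
```

The resulting symbols per limb are exactly the kind of vocabulary that can then be serialized into a parseable LabanXML document.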


Computer Vision and Pattern Recognition | 2017

NrityaGuru: A Dance Tutoring System for Bharatanatyam Using Kinect

Achyuta Aich; Tanwi Mallick; Himadri B. G. S. Bhuyan; Partha Pratim Das; Arun K. Majumdar

Indian Classical Dance (ICD) is a living heritage of India. Traditionally, Gurus (teachers) are the custodians of this heritage. They practice and pass on the legacy through their Shishyas (disciples), often in undocumented forms. The preservation of the heritage thus remains limited in time and scope. The emergence of digital multimedia technology has created the opportunity to preserve heritage by ensuring that it remains accessible over a long period of time. However, there have been only limited attempts to use effective technologies either in the pedagogy of learning dance or in the preservation of the heritage of ICD. In this context, the paper presents NrityaGuru, a tutoring system for Bharatanatyam, a form of ICD. Using Kinect to capture dance videos in multi-modal form, we design a system that helps a learner identify deviations in her dance postures and movements from the prerecorded benchmark performances of the tutor (Guru).
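A toy sketch of the kind of per-frame deviation measure such a tutoring system might compute (illustrative only; a real system would first align skeleton scale and translation, and the learner's and tutor's timelines):

```python
import math

def posture_deviation(learner, tutor):
    """Mean per-joint Euclidean distance between a learner's skeleton frame
    and the tutor's benchmark frame (same joint ordering assumed).

    learner, tutor: lists of (x, y, z) joint positions.
    Larger values indicate a larger posture deviation for this frame.
    """
    dists = [math.dist(a, b) for a, b in zip(learner, tutor)]
    return sum(dists) / len(dists)
```

Flagging the joints with the largest individual distances, rather than only the mean, is what would let a tutoring UI point at the specific limb that deviates.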


International Conference on Computer Vision Theory and Applications | 2016

Fast Gait Recognition from Kinect Skeletons

Tanwi Mallick; Ankit Khedia; Partha Pratim Das; Arun K. Majumdar

Recognizing persons from gait has attracted attention in computer vision research for over a decade and a half. To extract the motion information in gait, researchers have either used wearable markers or RGB videos. Markers naturally offer good accuracy and reliability but have the disadvantage of being intrusive and expensive. RGB images, on the other hand, need high processing time to achieve good accuracy. The advent of low-cost depth data from Kinect 1.0 and its human-detection and skeleton-tracking abilities has opened new opportunities in gait recognition. With skeleton data, it becomes cheaper and easier to obtain the body-joint information that can provide critical clues to gait-related motions. In this paper, we attempt to use the skeleton stream from Kinect 1.0 for gait recognition. Various types of gait features are extracted from the joint points in the stream, and appropriate classifiers are used to compute effective matching scores. To test our system and compare performance, we create a benchmark data set of 5 walks each for 29 subjects and implement a state-of-the-art gait recognizer for RGB videos. Tests show a moderate accuracy of 65% for our system. This is low compared to the accuracy of the RGB-based method (which achieved 83% on the same data set) but high compared to similar skeleton-based approaches (usually below 50%). Further, we compare the execution time of various parts of our system to highlight the efficiency advantages of our method and its potential as a real-time recognizer if an optimized implementation is done.
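To make the skeleton-based idea concrete, here is a toy sketch (not the paper's feature set or classifiers) that uses average pairwise joint distances over a walk as a static gait signature, with nearest-neighbour matching against a gallery:

```python
import math

def gait_signature(frames):
    """Average pairwise joint distances over a walk as a simple static
    gait feature vector (limb proportions are person-specific).

    frames: list of skeleton frames, each a list of (x, y, z) joints.
    """
    n = len(frames[0])
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return [sum(math.dist(f[i], f[j]) for f in frames) / len(frames)
            for i, j in pairs]

def identify(probe, gallery):
    """Nearest-neighbour match of a probe signature against a gallery
    dict {subject_id: signature}; returns the closest subject id."""
    return min(gallery, key=lambda s: math.dist(probe, gallery[s]))
```

Static proportions alone ignore the dynamics of gait; the paper's point about combining several feature types and matching scores is precisely what a sketch like this leaves out.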


International Conference on Computer Vision Theory and Applications | 2016

Facial Emotion Recognition from Kinect Data – An Appraisal of Kinect Face Tracking Library

Tanwi Mallick; Palash Goyal; Partha Pratim Das; Arun K. Majumdar

Facial expression classification and emotion recognition from gray-scale or colour images or videos have been extensively explored over the last two decades. In this paper we address the emotion recognition problem using Kinect 1.0 data and the Kinect Face Tracking Library (KFTL). A generative approach based on facial muscle movements is used to classify emotions. We detect various Action Units (AUs) of the face from the feature points extracted by KFTL and then recognize emotions by Artificial Neural Networks (ANNs) based on the detected AUs. We use six emotions, namely, Happiness, Sadness, Fear, Anger, Surprise and Neutral for our work and appraise the strengths and weaknesses of KFTL in terms of feature extraction, AU computations, and emotion detection. We compare our work with earlier studies on emotion recognition from Kinect 1.0 data.
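A much-simplified sketch of AU detection from tracked feature points (the AU-to-point mapping and the threshold below are hypothetical; the paper's pipeline additionally feeds the detected AUs into ANNs for emotion classification):

```python
def detect_aus(neutral, current, threshold=0.02):
    """Flag Action Units whose tracked feature-point displacement from the
    neutral face exceeds a threshold.

    neutral, current: dicts of feature-point name -> (x, y).
    Returns the set of active AU labels.
    """
    # Hypothetical mapping: which tracked point drives which AU, and the
    # sign of the vertical movement that activates it (+y = downward).
    au_rules = {
        'AU12_lip_corner_pull': ('mouth_corner', -1),  # corners move up
        'AU4_brow_lower':       ('inner_brow',   +1),  # brows move down
        'AU26_jaw_drop':        ('chin',         +1),  # chin moves down
    }
    active = set()
    for au, (point, sign) in au_rules.items():
        dy = current[point][1] - neutral[point][1]
        if sign * dy > threshold:
            active.add(au)
    return active
```

The appraisal aspect of the paper is visible even here: the reliability of each AU hinges entirely on how stably the tracking library reports the underlying feature points.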


International Conference on Computer Vision Theory and Applications | 2015

Omni-directional Reconstruction of Human Figures from Depth Data using Mirrors

Tanwi Mallick; Rishabh Agrawal; Partha Pratim Das; Arun K. Majumdar

In this paper we present a method for omni-directional 3D reconstruction of a human figure using a single Kinect, with two mirrors providing the 360° view. We get three views from a single depth frame (and its corresponding RGB frame): the real view of the human and two virtual views generated through the mirrors. Using these three views, our proposed system reconstructs a 360° view of the human. The reconstruction system is robust in that it can reconstruct the 360° view of any object (though it is particularly designed for human figures) from a single depth image and RGB image. The system overcomes the synchronization difficulties and the interference-noise problem of multi-Kinect set-ups. The methodology can also be applied to non-Kinect RGB-D cameras and can be improved in several ways in future.
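The key step of fusing the virtual views is a reflection across the estimated mirror plane; a minimal sketch (hypothetical names, plane given as n · x = d with unit normal n):

```python
def unreflect(points, n, d):
    """Map points seen 'inside' a mirror back to their real-world positions
    by reflecting them across the mirror plane n . x = d (unit normal n).

    This is how virtual Kinect views can be merged with the real view to
    assemble a 360-degree point cloud.
    """
    real = []
    for p in points:
        # Signed distance of p from the plane, then reflect through it.
        t = sum(ni * pi for ni, pi in zip(n, p)) - d
        real.append(tuple(pi - 2 * t * ni for pi, ni in zip(p, n)))
    return real
```

Applying this once per mirror, with each mirror's own (n, d), turns the two virtual depth regions into back-side geometry of the figure.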


Computer Vision and Pattern Recognition | 2015

Robust control of applications by hand-gestures

Aakash Anuj; Tanwi Mallick; Partha Pratim Das; Arun K. Majumdar

We use the RGB-D technology of Kinect to control an application with hand gestures, using PowerPoint as the test application. The system can start/end a presentation, navigate between slides, capture or release control of the cursor, and move it through natural gestures. Such a system is useful and hygienic in kitchens, lavatories, and hospital ICUs for touch-less surgery, and the like. The challenge is to extract meaningful gestures from continuous hand motion. We propose a system that recognizes isolated gestures from continuous hand motion for multiple gestures in real time. Experimental results show that the system has 96.48% precision (at 96.00% recall) and performs better than the Microsoft Gesture Recognition library for swipe gestures.
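A simplified sketch of spotting isolated swipe gestures in a continuous stream of hand positions (illustrative thresholds and logic, not the system's actual recognizer):

```python
def detect_swipes(xs, min_speed=0.05, min_len=3):
    """Segment isolated left/right swipe gestures out of a continuous
    stream of hand x-positions (one sample per frame).

    A frame belongs to a swipe while the hand moves faster than min_speed
    in one direction; runs shorter than min_len frames are ignored, which
    filters out incidental hand motion between deliberate gestures.
    """
    swipes = []
    run_dir, run_len = 0, 0
    for i in range(1, len(xs)):
        v = xs[i] - xs[i - 1]
        d = 1 if v > min_speed else (-1 if v < -min_speed else 0)
        if d != 0 and d == run_dir:
            run_len += 1
        else:
            if run_dir != 0 and run_len >= min_len:
                swipes.append('right' if run_dir > 0 else 'left')
            run_dir, run_len = d, (1 if d != 0 else 0)
    if run_dir != 0 and run_len >= min_len:
        swipes.append('right' if run_dir > 0 else 'left')
    return swipes
```

Rejecting short runs is the simplest form of the isolation problem the abstract describes: deciding which stretches of continuous motion are gestures at all.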


International Conference on Computer Vision Theory and Applications | 2014

Study of Interference Noise in multi-Kinect set-up

Tanwi Mallick; Partha Pratim Das; Arun K. Majumdar

Collaboration


Dive into Tanwi Mallick's collaborations.

Top Co-Authors

Arun K. Majumdar
Indian Institute of Technology Kharagpur

Partha Pratim Das
Indian Institute of Technology Kharagpur

Himadri B. G. S. Bhuyan
Indian Institute of Technology Kharagpur

Aakash Anuj
Indian Institute of Technology Kharagpur

Achyuta Aich
Indian Institute of Technology Kharagpur

Anindhya Sankhla
Indian Institute of Technology Kharagpur

Vinanti Kalangutkar
Indian Institute of Technology Kharagpur

Vivek Nautiyal
Indian Institute of Technology Kharagpur