Dinesh Mandalapu
Hewlett-Packard
Publications
Featured research published by Dinesh Mandalapu.
International Conference on Pattern Recognition | 2010
Poonam Suryanarayan; Anbumani Subramanian; Dinesh Mandalapu
Hand pose recognition has been a problem of great interest to the Computer Vision and Human Computer Interaction communities for many years, and current solutions either require additional accessories at the user end or demand enormous computation time. These limitations arise mainly from the high dexterity of the human hand and the occlusions created in the limited view of the camera. This work utilizes depth information and a novel algorithm to recognize scale- and rotation-invariant hand poses dynamically. We have designed a volumetric shape descriptor enfolding the hand to generate a 3D cylindrical histogram and achieved robust pose recognition in real time.
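A minimal sketch of the kind of descriptor the abstract describes: bin a hand's 3D depth points into a cylindrical histogram around the centroid axis. The binning choices, normalization scheme, and function names here are illustrative assumptions, not the paper's actual descriptor; ignoring the azimuthal angle is one simple way to obtain rotation invariance about the axis, and normalizing by the cloud's extent gives scale invariance.

```python
import numpy as np

def cylindrical_histogram(points, r_bins=5, z_bins=8):
    """Bin 3D points into an (r, z) cylindrical histogram about the centroid axis.

    Dropping the azimuthal angle makes the descriptor invariant to rotation
    about the axis; normalizing r and z by the cloud's extent makes it
    scale-invariant. Centering on the centroid gives translation invariance.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)             # centroid at the origin
    r = np.hypot(centered[:, 0], centered[:, 1])  # radial distance from axis
    z = centered[:, 2]
    r = r / (r.max() + 1e-9)                      # scale-normalize radius
    z = (z - z.min()) / (np.ptp(z) + 1e-9)        # scale-normalize height
    hist, _, _ = np.histogram2d(r, z, bins=(r_bins, z_bins),
                                range=[[0, 1], [0, 1]])
    return hist.ravel() / len(pts)                # normalize counts to sum to 1
```

With this normalization, two hand clouds that differ only by translation and uniform scale produce identical descriptors, which can then be compared with any standard histogram distance.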
International Conference on Pattern Recognition | 2010
Manavender R. Malgireddy; Jason J. Corso; Srirangaraj Setlur; Venu Govindaraju; Dinesh Mandalapu
Hand gesture interpretation is an open research problem in Human Computer Interaction (HCI), which involves locating gesture boundaries (gesture spotting) in a continuous video sequence and recognizing the gesture. Existing techniques model each gesture as a temporal sequence of visual features extracted from individual frames, which is inefficient given the large variability of frames across timestamps. In this paper, we propose a new sub-gesture modeling approach which represents each gesture as a sequence of fixed sub-gestures (groups of consecutive frames with locally coherent context) and provides robust modeling of the visual features. We further extend this approach to the task of gesture spotting, where the gesture boundaries are identified using a filler model and a gesture-completion model. Experimental results show that the proposed method outperforms state-of-the-art Hidden Conditional Random Fields (HCRF) based methods and baseline gesture spotting techniques.
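A toy sketch of the core idea in the abstract: collapse a per-frame feature sequence into a shorter sequence of sub-gesture symbols, each covering a group of consecutive frames. This hypothetical simplification averages fixed-length chunks and labels each with its nearest codeword; the paper's actual model is statistical, and the chunking and codebook here are illustrative assumptions.

```python
import numpy as np

def to_subgesture_sequence(frames, codebook, chunk=5):
    """Turn a (T, d) per-frame feature array into a list of sub-gesture symbols.

    Each run of `chunk` consecutive frames is averaged into one segment
    feature, then labeled with the index of the nearest codebook vector.
    The resulting symbol sequence is far shorter and more stable than the
    raw frame sequence, which is the motivation for sub-gesture modeling.
    """
    frames = np.asarray(frames, dtype=float)
    codebook = np.asarray(codebook, dtype=float)
    symbols = []
    for i in range(len(frames) // chunk):
        segment = frames[i * chunk:(i + 1) * chunk].mean(axis=0)
        dists = np.linalg.norm(codebook - segment, axis=1)
        symbols.append(int(dists.argmin()))
    return symbols
```

The symbol sequences produced this way could then feed any sequence model (e.g. an HMM-style recognizer with a filler model for the non-gesture background).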
International Conference on Human-Computer Interaction | 2011
Dinesh Mandalapu; Sriram Subramanian
Pressure is a useful medium for interaction as it can be used in different contexts, such as navigating through depth in 3-D, time-series visualizations, and zoomable interfaces. We propose pressure-based input as an alternative to repetitive multi-touch interactions, such as expanding/pinching to zoom. While most user interface controls for zooming or scrolling are bidirectional, pressure is primarily a one-way continuous parameter (from zero to positive). Human ability to control pressure from positive back to zero is limited, and this limitation must be addressed to make the medium usable for a range of interactive tasks. We first carry out an experiment to measure the effectiveness of various pressure control functions for controlling pressure in both directions (from zero to positive and positive to zero). Based on this preliminary knowledge, we compare the performance of a pressure-based zooming system with a multi-touch expand/pinch gesture based zooming system. Our results show that pressure input is an improvement over multi-touch interactions that involve multiple invocations, such as the one presented in this paper.
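A minimal sketch of what a pressure control function of the kind the abstract evaluates might look like: a mapping from a normalized sensor reading to a continuous zoom rate. The deadzone, power curve, and all constants below are illustrative assumptions, not the functions studied in the paper.

```python
def pressure_to_zoom_rate(pressure, deadzone=0.05, gamma=2.0, max_rate=4.0):
    """Map a normalized pressure reading in [0, 1] to a continuous zoom rate.

    A small deadzone absorbs sensor noise near zero so the user can reliably
    return to 'no zoom' (the positive-to-zero direction the paper identifies
    as difficult); a power curve with gamma > 1 gives fine control at light
    pressure and faster zooming as pressure increases.
    """
    p = min(max(pressure, 0.0), 1.0)  # clamp to the sensor's valid range
    if p <= deadzone:
        return 0.0
    # Rescale the usable range above the deadzone to [0, 1], then shape it.
    usable = (p - deadzone) / (1.0 - deadzone)
    return max_rate * usable ** gamma
```

Because pressure is one-way, a real system would still need a separate mechanism (e.g. a mode toggle or a second contact) to choose between zooming in and zooming out.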
Intelligent Human Computer Interaction | 2012
Sriganesh Madhvanath; Dinesh Mandalapu; Tarun Madan; Naznin Rao; Ramesh Kozhissery
With touch-based interfaces becoming commonplace on personal computing devices ranging from phones and slates to notebook and desktop PCs, a number of common tasks that were once performed using mouse or keyboard input now need to be performed using fingers on the touch surface. Finger-drawn gestures offer a viable alternative to desktop and keyboard shortcuts for common tasks such as launching applications and navigating large media collections. To be truly effective, the interface for defining, managing, and invoking gestures should be highly intuitive and optimized for the device. In particular, the process of invoking gestures should be seamless and natural, and gesture recognition needs to be robust for the specific user. In this paper, we describe GeCCo (Gesture Command and Control), a system for personalized finger-gesture shortcuts on touch-enabled desktops and trackpad-enabled notebook PCs. One of the key issues addressed in the design of GeCCo is mode switching in the context of notebook PCs. We describe a user study to decide between different interactions for mode switching; the interactions are designed so that the mode switch and the gesture can be indicated simultaneously. Since new gestures may be defined by the user at any time, statistical pattern classification techniques that require large numbers of training samples per gesture are not useful. Instead, we use nearest-neighbor classification with Dynamic Time Warping (DTW) distance, and a writer adaptation scheme for improving accuracy to desired levels. We conclude the paper with experimental results and some thoughts on next steps.
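The classifier named in the abstract, nearest-neighbor matching under DTW distance, can be sketched as follows. This is a textbook 1-D DTW over a toy template dictionary, not GeCCo's implementation; in practice the sequences would be 2-D pen trajectories and the template set would grow through writer adaptation.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    D[i, j] holds the minimal cumulative cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three predecessor alignments.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """Nearest-neighbor rule: label of the template closest to the query.

    `templates` maps a gesture label to one stored example sequence, so a
    single user-provided sample per gesture is enough — the property that
    makes this approach suit user-defined gestures.
    """
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Because DTW aligns sequences elastically in time, a gesture drawn more slowly (with repeated samples) still matches its template, which is exactly why it suits variable-speed finger input.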
Archive | 2010
Anbumani Subramanian; Vinod Pathangay; Dinesh Mandalapu
Archive | 2011
Sitaram Ramachandrula; Dinesh Mandalapu; Suryaprakash Kompalli; Anjaneyulu Seetha Rama Kuchibhotla; Nagabhushana Ayyanahal Matad; Srinivasu Godavari; Geetha Manjunath
Archive | 2009
Muralikrishna Sridhar; Dinesh Mandalapu; Naveen Sundar Govindarajulu; Sriganesh Madhvanath
Archive | 2009
Dinesh Mandalapu; Anjaneyulu Seetha Rama Kuchibhotla; Sriganesh Madhvanath; Deepu Vijayasenan; Rama Vennelakanti
Archive | 2008
Sriganesh Madhvanath; Dinesh Mandalapu; Ajay Gupta; Shekhar Ramachandra Borgaonkar
Archive | 2008
Anjaneyulu Seetha Rama Kuchibhotla; Dinesh Mandalapu; Tracy K. Freeman; Kimberly A. Salisbury