Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Anbumani Subramanian is active.

Publication


Featured research published by Anbumani Subramanian.


International Conference on Pattern Recognition | 2010

Dynamic Hand Pose Recognition Using Depth Data

Poonam Suryanarayan; Anbumani Subramanian; Dinesh Mandalapu

Hand pose recognition has been a problem of great interest to the computer vision and human-computer interaction communities for many years, and current solutions either require additional accessories on the user's end or demand enormous computation time. These limitations arise mainly from the high dexterity of the human hand and the occlusions created in the camera's limited view. This work uses depth information and a novel algorithm to recognize scale- and rotation-invariant hand poses dynamically. We designed a volumetric shape descriptor that enfolds the hand to generate a 3D cylindrical histogram, and achieved robust pose recognition in real time.
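
The descriptor lends itself to a compact implementation. Below is a minimal sketch in Python/NumPy of a cylindrical-histogram shape descriptor over a depth-derived hand point cloud; the bin counts, axis convention, and normalization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cylindrical_histogram(points, n_radial=5, n_angular=8, n_height=6):
    """Volumetric cylindrical histogram over an (N, 3) hand point cloud.

    `points` is assumed to be the hand region already segmented from the
    depth map; bin counts and axis choice are illustrative assumptions.
    """
    centered = points - points.mean(axis=0)      # translation invariance

    # Cylindrical coordinates around the vertical (z) axis.
    r = np.hypot(centered[:, 0], centered[:, 1])
    theta = np.arctan2(centered[:, 1], centered[:, 0])   # in [-pi, pi]
    z = centered[:, 2]

    # Normalizing radius and height by their extents gives scale invariance.
    r = r / (r.max() + 1e-9)
    z = (z - z.min()) / (np.ptp(z) + 1e-9)

    hist, _ = np.histogramdd(
        np.stack([r, theta, z], axis=1),
        bins=(n_radial, n_angular, n_height),
        range=((0.0, 1.0), (-np.pi, np.pi), (0.0, 1.0)),
    )
    # L1-normalize so the descriptor is independent of point count.
    return (hist / hist.sum()).ravel()
```

Rotation about the camera axis only shifts the angular bins of such a histogram, so matching against stored poses can be made rotation invariant by comparing circular shifts along the angular dimension; this is one plausible reading of the paper's invariance claim, not its stated method.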


Computer Vision / Computer Graphics Collaboration Techniques | 2011

Real-time upper-body human pose estimation using a depth camera

Himanshu Prakash Jain; Anbumani Subramanian; Sukhendu Das; Anurag Mittal

Automatic detection and pose estimation of humans is an important task in human-computer interaction (HCI), user interaction, and event analysis. This paper presents a model-based approach for detecting and estimating human pose by fusing depth and RGB color data from a monocular view. The proposed system uses Haar cascade-based detection and template matching to track the most reliably detectable parts, namely the head and torso. A stick-figure model represents the detected body parts. Fitting is then performed independently for each limb using a weighted distance transform map. Fitting each limb independently speeds up the process and makes it robust, avoiding the combinatorial complexity problems common to methods of this type. The output is a stick-figure model consistent with the pose of the person in the input image. The algorithm runs in real time, is fully automatic, and can detect multiple non-intersecting people.
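
As a rough illustration of the pipeline's building blocks, the sketch below uses OpenCV's stock frontal-face Haar cascade and a distance transform over a depth-derived silhouette; the file names, depth threshold, and weighting are assumptions, and the paper's template matching and limb-fitting steps are not reproduced.

```python
import cv2
import numpy as np

# Stock frontal-face cascade shipped with OpenCV (an illustrative stand-in
# for the paper's head/torso detectors).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.png")                        # RGB frame (assumed file)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # aligned depth map (assumed file)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
heads = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Foreground silhouette from depth; the fixed threshold is an assumption.
silhouette = (depth > 10).astype(np.uint8)

# Distance transform: every silhouette pixel gets its distance to the
# nearest background pixel, so pixels along limb axes score highest.
dist = cv2.distanceTransform(silhouette, cv2.DIST_L2, 5)
limb_map = dist / (dist.max() + 1e-9)                  # normalized weight map

for (x, y, w, h) in heads:
    # A stick-figure torso would be anchored below each detected head.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

Scoring a candidate limb segment then reduces to summing limb_map along that segment, which is plausibly what makes per-limb fitting cheap enough for real time.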


IEEE Workshop on Person-Oriented Vision | 2011

Augmented reality for immersive remote collaboration

Dan Gelb; Anbumani Subramanian; Kar-Han Tan

Video conferencing systems are designed to deliver a collaboration experience as close as possible to actually meeting in person. Current systems, however, do a poor job of integrating users' video streams with shared collaboration content. Real and virtual content are unnaturally separated, leading to problems with nonverbal communication and the overall conference experience. Methods of interacting with shared content are typically limited to pointing with a mouse, which is not a natural component of face-to-face human conversation. This paper presents a natural and intuitive method for sharing digital content within a meeting using augmented reality and computer vision. Real and virtual content is seamlessly integrated into the collaboration space. We develop new vision-based methods for interacting with inserted digital content, including target finding and gesture-based control. These improvements let us deliver an immersive collaboration experience using natural gesture- and object-based interaction.
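
As a toy illustration of the "target finding" step, the sketch below locates a known planar marker in a camera frame by normalized cross-correlation and reports where virtual content could be anchored; the marker image, file names, and confidence threshold are assumptions, not the paper's method.

```python
import cv2

frame = cv2.imread("meeting_frame.png", cv2.IMREAD_GRAYSCALE)  # camera frame (assumed file)
target = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)        # known marker (assumed file)

# Normalized cross-correlation of the marker against the frame.
result = cv2.matchTemplate(frame, target, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)                  # best-match location

if score > 0.8:  # confidence threshold is an illustrative assumption
    h, w = target.shape
    anchor = (top_left[0] + w // 2, top_left[1] + h // 2)
    print(f"anchor virtual content at {anchor} (score {score:.2f})")
```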


International Conference on Multimodal Interfaces | 2012

Designing multiuser multimodal gestural interactions for the living room

Sriganesh Madhvanath; Ramadevi Vennelakanti; Anbumani Subramanian; Ankit Shekhawat; Prasenjit Dey; Amit Rajan

Most work in the space of multimodal and gestural interaction has focused on single-user productivity tasks. The design of multimodal, freehand gestural interaction for multiuser lean-back scenarios is a relatively nascent area that has come into focus because of the availability of commodity depth cameras. In this paper, we describe our approach to designing multimodal gestural interaction for multiuser photo browsing in the living room, typically a shared experience with friends and family. We believe the lessons learned from this process will add value to the efforts of other researchers and designers interested in this design space.


International Conference on Human-Computer Interaction | 2011

Counting on your fingertips: an exploration and analysis of actions in the Rich Touch space

Rama Vennelakanti; Anbumani Subramanian; Sriganesh Madhvanath; Sriram Subramanian

Although multi-touch technology and horizontal interactive surfaces have been around for a decade now, there is limited understanding of how users use the Rich Touch space and multiple fingers to manipulate objects on a table. In this paper, we describe findings and insights from an observational study of how users manipulate photographs on a physical table surface. Through a detailed video analysis based on images captured from four distinct cameras, we investigate the actions users perform and various aspects of those actions, such as the number of fingers used, the space of action, and handedness. Our investigation shows that user interactions can be described in terms of a small set of actions, and it reveals insightful patterns in how hands, and how many fingers, are used to carry out these actions. These insights may in turn inform the design of future interactive surfaces and improve the accuracy of interpreting these actions.


International Conference on Intelligent Interactive Technologies and Multimedia | 2013

Factors of Influence in Co-located Multimodal Interactions

Ramadevi Vennelakanti; Anbumani Subramanian; Sriganesh Madhvanath; Prasenjit Dey

Most work on multimodal interaction in the human-computer interaction (HCI) space has focused on enabling a user to use one or more modalities in combination to interact with a system. However, there is still a long way to go toward making human-to-machine communication as rich and intuitive as human-to-human communication, where modalities are used individually, simultaneously, interchangeably, or in combination. The choice of modalities depends on a variety of factors, including the context of conversation, social distance, physical proximity, and duration. We believe such intuitive multimodal communication is the direction in which human-to-machine interaction is headed. In this paper, we present insights from studying current human-machine interaction methods. We carried out an ethnographic study to observe users in their homes as they interacted with media and media devices, by themselves and in small groups. One of the key learnings from this study is an understanding of the impact of the user's context on the choice of interaction modalities. The user-context factors that influence this choice include, but are not limited to: the user's distance from the device or media, the user's body posture during the interaction, the user's level of involvement with the media, the seating patterns (clusters) of co-located participants, the roles each participant plays, the notion of control among participants, and the duration of the activity. We believe these insights can inform the design of next-generation multimodal interfaces that are sensitive to user context, perform robust interpretation of interaction inputs, and support more human-like multimodal interaction.


Archive | 2010

Hand gesture recognition

Anbumani Subramanian; Vinod Pathangay; Dinesh Mandalapu


Archive | 2010

System and method for point, select and transfer hand gesture based user interface

Vinod Pathangay; Anbumani Subramanian


Archive | 2008

Correction of distortion in captured images

Prasenjit Dey; Anbumani Subramanian


Archive | 2011

Hand pose recognition

Yogesh Sankarasubramaniam; Krusheel Munnangi; Anbumani Subramanian

Collaboration


Dive into Anbumani Subramanian's collaborations.
