Publications


Featured research published by Ramadevi Vennelakanti.


Intelligent Human Computer Interaction | 2012

Freehand pose-based Gestural Interaction: Studies and implications for interface design

Dustin Freeman; Ramadevi Vennelakanti; Sriganesh Madhvanath

Most work on freehand gestural interaction has focused on high-energy, expressive interfaces for expert users. In this work, we examine the use of hand poses in laid-back freehand gestural interactions for novice users and the factors that impact gesture performance. Through two Wizard-of-Oz studies, one leading to the other, we observe how novice users behave under relaxed conditions. The first study explores the ease of use of a pose-based hand gesture vocabulary in the context of a photo-browsing task, and examines some of the key factors that impact the performance of such pose-based gestures. The second explores pose-based interaction techniques for widget manipulation tasks. These studies reveal that while hand poses have the potential to expand the gesture vocabulary and are easy to recall and use, a number of issues show up in actual performance, related to inadvertent changes in hand pose and hand trajectory. We summarize the implications of these findings for the design of pose-based freehand gestural interfaces, which we believe will be useful for both interaction designers and gesture recognition researchers.
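
The finding about inadvertent pose changes suggests filtering per-frame pose labels before the interface acts on them. The following is a minimal, hypothetical Python sketch of such a smoothing step; the pose names and window size are illustrative assumptions, not the vocabulary or recognizer from the paper.

```python
from collections import Counter, deque

# Hypothetical pose labels; the paper's actual gesture vocabulary is not listed here.
POSES = {"open_palm", "fist", "point", "pinch"}

class PoseSmoother:
    """Majority-vote filter over the last N per-frame pose labels.

    Novice users inadvertently drift between hand poses mid-gesture;
    a short voting window suppresses these transient changes before
    they reach the interface.
    """

    def __init__(self, window: int = 9):
        self.history = deque(maxlen=window)

    def update(self, frame_pose: str) -> str:
        assert frame_pose in POSES, f"unknown pose: {frame_pose}"
        self.history.append(frame_pose)
        # Return the most common pose seen in the current window.
        return Counter(self.history).most_common(1)[0][0]

smoother = PoseSmoother()
noisy_stream = ["fist", "fist", "open_palm", "fist", "fist"]  # one-frame glitch
stable = [smoother.update(p) for p in noisy_stream]
print(stable[-1])  # -> "fist": the transient "open_palm" frame is voted out
```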


International Conference on Multimodal Interfaces | 2012

Designing multiuser multimodal gestural interactions for the living room

Sriganesh Madhvanath; Ramadevi Vennelakanti; Anbumani Subramanian; Ankit Shekhawat; Prasenjit Dey; Amit Rajan

Most work in the space of multimodal and gestural interaction has focused on single-user productivity tasks. The design of multimodal, freehand gestural interaction for multiuser lean-back scenarios is a relatively nascent area that has come into focus because of the availability of commodity depth cameras. In this paper, we describe our approach to designing multimodal gestural interaction for multiuser photo browsing in the living room, typically a shared experience with friends and family. We believe that the lessons learned from this process will be valuable to other researchers and designers interested in this design space.
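
One recurring design problem in multiuser lean-back scenarios is deciding which co-located user currently controls the interface. Below is a hedged Python sketch of one simple arbitration rule over depth-camera skeleton data; the `TrackedUser` fields, the raise-above-shoulder rule, and the 5 cm threshold are assumptions made for illustration, not the paper's design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedUser:
    user_id: int
    hand_y: float      # hand height in metres, from a depth-camera skeleton
    shoulder_y: float  # shoulder height of the same skeleton

def active_controller(users: list[TrackedUser]) -> Optional[int]:
    """Pick which co-located user currently holds the floor.

    Rule: the user whose hand is raised highest above their own
    shoulder gets control; if nobody raises a hand, nobody does.
    """
    lift, uid = max((u.hand_y - u.shoulder_y, u.user_id) for u in users)
    return uid if lift > 0.05 else None  # 5 cm threshold filters resting poses

viewers = [TrackedUser(1, 0.9, 1.2), TrackedUser(2, 1.5, 1.2)]
print(active_controller(viewers))  # -> 2: the user with the raised hand
```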


Asia-Pacific Computer and Human Interaction | 2013

DESI: a virtual classroom system for distance education: a design exploration

Rafael Roballo Wanderley; Rafael Amaral; Victor Ximenes; Devendra Tewari; Sriganesh Madhvanath; Ramadevi Vennelakanti

Distance education (DE) today is mostly delivered in broadcast mode: classes are broadcast over networks for students to consume. While this mode of instruction stays close to the physical-classroom metaphor, it often lacks the rich two-way interaction that happens between students and the teacher, and among students, in a physical classroom. In this paper, we describe the design of DESI, a virtual classroom system for DE. We consider the scenario wherein a student attends a class at home on a PC or an internet-connected TV, and explore student and teacher interfaces that promote classroom interaction and integrate multimodal inputs to enable richer and more interactive virtual classroom experiences. We briefly describe the software architecture of the DESI system and present preliminary results from testing an early version of the system with end users. Our work is relevant to distance education on TV broadcast networks, online classrooms, and enterprise collaboration and e-learning systems.
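
To make the two-way back channel concrete, here is a hypothetical sketch of a wire format a DESI student client might use to send interaction events upstream; the event kinds and JSON fields are assumptions for illustration, since this summary describes the architecture only at a high level.

```python
import json
import time
from typing import Optional

# Hypothetical event kinds; the actual DESI protocol is not specified here.
EVENT_KINDS = {"hand_raise", "question", "poll_answer"}

def make_event(student_id: str, kind: str, payload: Optional[dict] = None) -> str:
    """Serialize a student-to-teacher interaction event, giving a
    broadcast-mode class the two-way channel the paper argues for."""
    if kind not in EVENT_KINDS:
        raise ValueError(f"unknown event kind: {kind}")
    return json.dumps({
        "student": student_id,
        "kind": kind,
        "payload": payload or {},
        "ts": time.time(),  # client timestamp for ordering on the server
    })

# A student client on a PC or connected TV would send this upstream:
print(make_event("s-042", "question", {"text": "Could you repeat slide 4?"}))
```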


International Conference on Intelligent Interactive Technologies and Multimedia | 2013

Factors of Influence in Co-located Multimodal Interactions

Ramadevi Vennelakanti; Anbumani Subramanian; Sriganesh Madhvanath; Prasenjit Dey

Most work on multimodal interaction in the human-computer interaction (HCI) space has focused on enabling a user to use one or more modalities in combination to interact with a system. However, there is still a long way to go towards making human-to-machine communication as rich and intuitive as human-to-human communication. In human-to-human communication, modalities are used individually, simultaneously, interchangeably, or in combination, and the choice of modalities depends on a variety of factors including the context of the conversation, social distance, physical proximity, and duration. We believe such intuitive multimodal communication is the direction in which human-to-machine interaction is headed. In this paper, we present insights from studying current human-machine interaction methods. We carried out an ethnographic study to observe users in their homes as they interacted with media and media devices, by themselves and in small groups. A key learning from this study is an understanding of the impact of the user's context on the choice of interaction modalities. The user-context factors that influence this choice include, but are not limited to: the distance of the user from the device or media, the user's body posture during the interaction, the user's level of involvement with the media, the seating pattern (cluster) of the co-located participants, the roles each participant plays, the notion of control among the participants, and the duration of the activity. We believe that the insights from this study can inform the design of next-generation multimodal interfaces that are sensitive to user context, interpret interaction inputs robustly, and support more human-like multimodal interaction.
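
As a toy illustration of context-sensitive modality selection, the Python sketch below ranks input modalities using three of the context factors the study lists; the scoring function and weights are invented for illustration and are not a model from the paper.

```python
def preferred_modalities(distance_m: float, posture: str, group_size: int) -> list[str]:
    """Rank input modalities from user-context factors: distance to the
    device, body posture, and the size of the co-located group.

    Illustrative heuristics: touch is useful only up close, freehand
    gesture peaks at mid-range, and speech suits a distant, reclined
    user but is penalized slightly as group cross-talk grows.
    """
    scores = {
        "touch": 2.0 - distance_m,
        "freehand_gesture": 1.0 - abs(distance_m - 2.0) / 2.0,
        "speech": 0.5 + (0.5 if posture == "reclined" else 0.0)
                      - 0.1 * max(group_size - 1, 0),
    }
    return sorted(scores, key=scores.get, reverse=True)

print(preferred_modalities(distance_m=3.0, posture="reclined", group_size=4))
# -> speech ranks first for a distant, reclined user in a small group
```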


Archive | 2010

System and method for distinguishing multimodal commands directed at a machine from ambient human communications

Ramadevi Vennelakanti; Prasenjit Dey


International Conference on Multimodal Interfaces | 2011

The picture says it all!: multimodal interactions and interaction metadata

Ramadevi Vennelakanti; Prasenjit Dey; Ankit Shekhawat; Phanindra Pisupati


Archive | 2011

Multimodal interactions based on body postures

Ramadevi Vennelakanti; Anbumani Subramanian; Prasenjit Dey; Sriganesh Madhvanath; Dinesh Mandalapu


Archive | 2010

System and method for using information from intuitive multimodal interactions for media tagging

Ramadevi Vennelakanti; Prasenjit Dey; Sriganesh Madhvanath


International Conference on Multimodal Interfaces | 2012

Pixene: creating memories while sharing photos

Ramadevi Vennelakanti; Sriganesh Madhvanath; Anbumani Subramanian; Ajith Sowndararajan; Arun David; Prasenjit Dey


Archive | 2012

Hand pose interaction

Dustin Freeman; Sriganesh Madhvanath; Ankit Shekhawat; Ramadevi Vennelakanti

