Publication


Featured research published by Ravi Kiran Sarvadevabhatla.


Robot and Human Interactive Communication | 2009

Learning together: ASIMO developing an interactive learning partnership with children

Sandra Y. Okita; Victor Ng-Thow-Hing; Ravi Kiran Sarvadevabhatla

Humanoid robots combine biologically inspired features, a human-like appearance, and intelligent behavior that naturally elicit social responses. Complex interactions are now possible in which children interact with and learn from robots. A pilot study attempted to determine which features in robots led to changes in learning and behavior. Three common learning styles (lecture, cooperative, and self-directed) were implemented in ASIMO to see whether children can learn from robots. General features such as a monotone robot-like voice versus a human-like voice were also compared. Thirty-seven children between the ages of 4 and 10 years participated in the study. Each child engaged in a table-setting task with ASIMO, which exhibited the different learning styles and general features. Children answered questions about the table-setting task as a learning measure. Preliminary evidence shows that learning styles and general features matter, especially for younger children.


IEEE-RAS International Conference on Humanoid Robots | 2009

Panoramic attention for humanoid robots

Ravi Kiran Sarvadevabhatla; Victor Ng-Thow-Hing

In this paper, we present a novel three-layer model of panoramic attention for our humanoid robot. In contrast to similar architectures employing coarse discretizations of the panoramic field, saliencies are maintained only for cognitively prominent entities (e.g. faces). In the absence of attention triggers, an idle policy makes the humanoid scan the panoramic visual field, imparting a human-like idle gaze while simultaneously registering attention-worthy entities. We also describe a model of cognitive panoramic habituation which maintains entity-specific persistence models, thus imparting lifetimes to entities registered across the panorama. This mechanism enables the memories of entities in the panorama to fade away, creating a human-like attentional effect. We describe scenarios demonstrating the aforementioned aspects. In addition, we present experimental results which demonstrate how the cognitive filtering aspect of our model reduces processing time and false-positive rates for standard entity-related modules such as face detection and recognition.
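
The cognitive panoramic habituation described above can be illustrated with a minimal sketch: each registered entity keeps a persistence value that decays over time and is refreshed on re-observation, so stale entities eventually fade from the panorama. The class, field names, and half-life constant below are illustrative assumptions, not taken from the paper.

```python
import math
import time

class PanoramicEntity:
    """An attention-worthy entity (e.g. a face) registered in the panorama."""

    def __init__(self, label, pan_angle, half_life_s=30.0):
        self.label = label              # e.g. a recognized face identity
        self.pan_angle = pan_angle      # where in the panorama it was seen
        self.half_life_s = half_life_s  # illustrative habituation constant
        self.last_seen = time.time()

    def refresh(self):
        """Re-observation resets the persistence clock."""
        self.last_seen = time.time()

    def persistence(self, now=None):
        """Exponentially decaying memory strength; 1.0 when just seen."""
        dt = (now or time.time()) - self.last_seen
        return math.exp(-math.log(2) * dt / self.half_life_s)

def prune_faded(entities, threshold=0.1):
    """Forget entities whose memory has faded below the threshold."""
    return [e for e in entities if e.persistence() >= threshold]
```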


KSII Transactions on Internet and Information Systems | 2011

Multimodal approach to affective human-robot interaction design with children

Sandra Y. Okita; Victor Ng-Thow-Hing; Ravi Kiran Sarvadevabhatla

Two studies examined different features of humanoid robots and their influence on children's affective behavior. The first study looked at interaction styles and general features of robots. The second study looked at how the robot's attention influences children's behavior and engagement. Through activities familiar to young children (e.g., table setting, storytelling), the first study found that a cooperative interaction style elicited more oculesic behavior and social engagement. The second study found that quality of attention, type of attention, and length of interaction influence affective behavior and engagement. For quality of attention, Wizard-of-Oz (WoZ) control elicited the most affective behavior, but automatic attention worked as well as WoZ when the interaction was short. For type of attention, moving from nonverbal to verbal attention increased children's oculesic behavior, utterances, and physiological responses. Affective interactions did not seem to depend on a single mechanism, but on a well-chosen confluence of technical features.


IEEE Robotics & Automation Magazine | 2009

Cognitive map architecture

Victor Ng-Thow-Hing; Kristinn R. Thórisson; Ravi Kiran Sarvadevabhatla; Joel Wormer; Thor List

We have developed the Cognitive Map robot architecture, which minimizes the amount of rewriting of existing legacy software required for integration. The Cognitive Map can be thought of as a centralized information space to which connected components contribute both internal and environmental state information. We leverage several proven concepts, such as blackboard architectures and publish-subscribe messaging, to develop a flexible robot architecture that exhibits fault tolerance, allows components to be easily substituted, and supports different structural paradigms such as subsumption, sense-plan-act, and three-tier architectures. Our multi-component distributed system has components that are loosely coupled via message passing and/or continuous data streams. This architecture was implemented on the humanoid robot ASIMO, manufactured by Honda Motor Co., Ltd. We review various forms of communication middleware and component models. The Architecture section provides an overview of our architecture and the considerations in its design. The Scenario Design section details the process from conceptualizing an interactive application to its instantiation in the robot architecture. The Components section singles out several important high-level components that play a significant role in many of our interactive scenarios. Finally, discussions and conclusions are presented.
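
The pairing of a blackboard with publish-subscribe messaging that the abstract describes can be sketched minimally as below; the Blackboard class and its method names are hypothetical illustrations, not the actual Cognitive Map API.

```python
from collections import defaultdict

class Blackboard:
    """Centralized information space with publish-subscribe notification."""

    def __init__(self):
        self._state = {}                      # shared state, keyed by topic
        self._subscribers = defaultdict(list)

    def subscribe(self, key, callback):
        """A component registers interest in updates to a key."""
        self._subscribers[key].append(callback)

    def publish(self, key, value):
        """Write state and notify loosely coupled subscribers."""
        self._state[key] = value
        for notify in self._subscribers[key]:
            notify(key, value)

# Example: a behavior module reacting to a perception module's output.
bb = Blackboard()
bb.subscribe("face.detected", lambda k, v: print(f"greet {v}"))
bb.publish("face.detected", "known_person")   # prints: greet known_person
```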


Intelligent Robots and Systems | 2008

The memory game: Creating a human-robot interactive scenario for ASIMO

Victor Ng-Thow-Hing; Jongwoo Lim; Joel Wormer; Ravi Kiran Sarvadevabhatla; Carlos Rocha; Kikuo Fujimura; Yoshiaki Sakagami

We present a human-robot interactive scenario consisting of a memory card game between Honda's humanoid robot ASIMO and a human player. The game features perception exclusively through ASIMO's on-board cameras and both reactive and proactive behaviors specific to different situational contexts in the memory game. ASIMO is able to build a dynamic environmental map of relevant objects in the game such as the table and card layout as well as understand activities from the player such as pointing at cards, flipping cards and removing them from the table. Our system architecture, called the Cognitive Map, treats the memory game as a multi-agent system, with modules acting independently and communicating with each other via messages through a shared blackboard system. The game behavior module can model game state and contextual information to make decisions based on different pattern recognition modules. Behavior is then sent through high-level command interfaces to be resolved into actual physical actions by the robot via a multi-modal communication module. The experience gained in modeling this interactive scenario will allow us to reuse the architecture to create new scenarios and explore new research directions in learning how to respond to new interactive situations.
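
As a hedged illustration of how such a game-behavior module might act on remembered state, the sketch below (all names invented for illustration, not the authors' implementation) chooses ASIMO's next move: flip a remembered matching pair if one exists, otherwise explore an unseen card.

```python
def choose_move(seen_cards, face_down):
    """seen_cards: {position: symbol} remembered from earlier flips.
    face_down: positions still hidden on the table (at least two).
    Returns the two positions ASIMO should flip next."""
    remembered = {}
    for pos, symbol in seen_cards.items():
        if pos not in face_down:
            continue                         # card already matched and removed
        if symbol in remembered:
            return remembered[symbol], pos   # proactive: play a known pair
        remembered[symbol] = pos
    # Reactive fallback: flip an unseen card first to gain information.
    unseen = [p for p in face_down if p not in seen_cards]
    first = unseen[0] if unseen else next(iter(face_down))
    second = next(p for p in face_down if p != first)
    return first, second
```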


Human-Robot Interaction | 2012

Captain May I? Proxemics study examining factors that influence distance between humanoid robots, children, and adults during human-robot interaction

Sandra Y. Okita; Victor Ng-Thow-Hing; Ravi Kiran Sarvadevabhatla

This proxemics study examines whether the physical distance between robots and humans differs based on the following factors: 1) age: children vs. adults; 2) who initiates the approach: human approaching the robot vs. robot approaching the human; 3) prompting: verbal invitation vs. non-verbal gesture (e.g., beckoning); and 4) informing: announcement vs. permission vs. nothing. Results showed that both verbal and non-verbal prompting had a significant influence on physical distance. Physiological data were also used to detect the appropriate timing of approach for a more natural and comfortable interaction.


Robot and Human Interactive Communication | 2010

Extended duration human-robot interaction: Tools and analysis

Ravi Kiran Sarvadevabhatla; Victor Ng-Thow-Hing; Sandra Y. Okita

Extended human-robot interactions possess unique aspects not exhibited in short-term interactions spanning a few minutes or extremely long-term interactions spanning days. To comprehensively monitor such interactions, we need special recording mechanisms which ensure the interaction is captured at multiple spatio-temporal scales, viewpoints, and modalities (audio, video, physiological). To minimize cognitive burden, we need tools which can automate the process of annotating and analyzing the resulting data. In addition, we require these tools to provide a unified, multi-scale view of the data and to help discover patterns in the interaction process. In this paper, we describe recording and analysis tools which are helping us analyze extended human-robot interactions with children as subjects. We also provide some experimental results which highlight the utility of such tools.
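
One requirement named above, a unified multi-scale view over streams recorded at different rates and modalities, reduces largely to timestamp alignment. Below is a minimal sketch under an assumed data layout; it is not the authors' tool.

```python
import bisect

def nearest_sample(timestamps, t):
    """Index of the sample closest in time to t in a sorted timestamp list."""
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def aligned_view(streams, t):
    """streams: {name: (sorted_timestamps, samples)}, one entry per modality
    (audio, video, physiological). Returns each stream's sample nearest t,
    giving a single cross-modal snapshot of the interaction at that instant."""
    return {name: samples[nearest_sample(ts, t)]
            for name, (ts, samples) in streams.items()}
```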


International Conference on Multimodal Interfaces | 2011

Adaptive facial expression recognition using inter-modal top-down context

Ravi Kiran Sarvadevabhatla; Mitchel Benovoy; Sam Musallam; Victor Ng-Thow-Hing

The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are considered independently, and the effect of one modality on another, which we call inter-modal influence (e.g. speech or head pose modifying the facial appearance), is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine, which offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.
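
A hedged sketch of the 2-D contextual-mask idea: when an inter-modal context such as active speech is detected, facial regions whose appearance is being driven by that other modality (e.g. the mouth) are down-weighted before expression classification. The region boundaries and weights are invented for illustration.

```python
import numpy as np

def speech_mask(h, w):
    """Illustrative 2-D mask over an h x w face crop: 1.0 keeps a region,
    lower values suppress regions distorted by concurrent speech."""
    mask = np.ones((h, w), dtype=np.float32)
    mask[int(0.65 * h):, :] = 0.3   # down-weight the mouth region
    return mask

def apply_context(feature_map, context):
    """Activate the mask that matches the prevailing inter-modal context."""
    if context == "speaking":
        h, w = feature_map.shape
        return feature_map * speech_mask(h, w)
    return feature_map              # neutral context: pipeline unchanged
```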


Archive | 2012

A Multi-Modal Panoramic Attentional Model for Robots and Applications

Ravi Kiran Sarvadevabhatla; Victor Ng-Thow-Hing

Humanoid robots are becoming increasingly competent in perceiving their surroundings and in responding intelligently to worldly events. A popular paradigm for realizing such responses is attention. There are two important aspects of attention in the context of humanoid robots. First, perception describes how to design the sensory system to filter out useful salient features in the sensory field and perform subsequent higher-level processing for tasks such as face recognition. Second, the behavioral response defines how the humanoid should act when it encounters the salient features. A model of attention enables the humanoid to achieve a semblance of liveliness that goes beyond exhibiting a mechanized repertoire of responses. It also facilitates progress in realizing models of higher-level cognitive processes, such as having people direct the robot's attention to a specific target stimulus (Cynthia et al., 2001). Studies indicate that humans employ attention as a mechanism for preventing sensory overload (Tsotsos et al., 2005; Komatsu, 1994), a finding relevant to robotics given that information bandwidth is often a concern.

The neurobiologically inspired models of Itti (Tsotsos et al., 2005), initially developed for modeling visual attention, have since been improved (Dhavale et al., 2003) and broadened in scope to include even auditory modes of attention (Kayser et al., 2008). Such models have formed the basis of multi-modal attention mechanisms in (humanoid) robots (Maragos, 2008; Rapantzikos, 2007). Typical implementations of visual attention mechanisms employ bottom-up processing of camera images to arrive at the so-called saliency map, which encodes the unconstrained salience of the scene. Salient regions identified from the saliency map are processed by higher-level modules such as object and face recognition, whose results then serve as referential entities for the task at hand (e.g. acknowledging a familiar face, noting the location of a recognized object). Building upon recent additions to Itti's original model (Tsotsos et al., 2005), some implementations also use top-down control mechanisms to constrain the salience (Cynthia et al., 2001; Navalpakkam and Itti, 2005; Moren et al., 2008).

In most implementations, the cameras are held fixed, which simplifies processing and the consequent modeling of the attention mechanism. However, this restricts the visual scope of attention, particularly when the robot has to interact with multiple people who may be spread beyond its limited field of view. Moreover, they may choose to advertise their presence through a non-visual modality such as speech utterances. Attempts to overcome this situation lead naturally to the idea of widening the visual scope and, therefore, to the idea of panoramic attention.
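
The bottom-up saliency map mentioned above is commonly approximated by center-surround differences computed across image scales; the sketch below is a much-simplified stand-in for that idea, not the Itti et al. implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(gray):
    """Crude center-surround saliency for a 2-D grayscale float array:
    the absolute difference between a fine and a coarse Gaussian blur
    highlights regions that stand out from their surroundings."""
    center = gaussian_filter(gray, sigma=2)    # fine ("center") scale
    surround = gaussian_filter(gray, sigma=8)  # coarse ("surround") scale
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-8)            # normalize to [0, 1]
```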


Archive | 2009

Cognitive Map Architecture: Facilitation of Human–Robot Interaction in Humanoid Robots

Victor Ng-Thow-Hing; Kristinn R. Thórisson; Ravi Kiran Sarvadevabhatla; Joel Wormer; Thor List

Collaboration


Dive into Ravi Kiran Sarvadevabhatla's collaboration.

Top Co-Authors

Thor List

University of Edinburgh
