Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yeow Kee Tan is active.

Publication


Featured research published by Yeow Kee Tan.


International Journal of Social Robotics | 2011

Towards an Effective Design of Social Robots

Haizhou Li; John-John Cabibihan; Yeow Kee Tan

In the past decade, we have witnessed intensive technological progress spurred on by community initiatives. The guest editors held their first meeting in 2010 and found it timely for the community to collectively document the recent advances in social robotics through a special issue of the International Journal of Social Robotics. This special issue presents various designs for social robots intended to make them more effective and natural in human-robot interaction. Social robots are autonomous robots that are able to interact and communicate among themselves, with humans, and with the environment, and that are designed to operate according to established social and cultural norms. The main requirement for such robots is intelligence, which forms the basis of human-robot interaction. Many design requirements should be considered for social robots. Among these are the abilities to:


Systems, Man, and Cybernetics | 2012

Robust Multiperson Detection and Tracking for Mobile Service and Social Robots

Liyuan Li; Shuicheng Yan; Xinguo Yu; Yeow Kee Tan; Haizhou Li

This paper proposes an efficient system that integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum-likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust in the complex, crowded scenarios of multiperson tracking, an improved sequential strategy for the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance efficiency and robustness for real-time performance, at each stage the first two objects from the priority-ordered list are tested and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detection, occlusion reasoning, and sequential mean-shift tracking. Various examples that show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations of multiperson tracking performance are also reported. Experimental results indicate that significant improvements have been achieved by using the proposed method.
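The sequential, priority-ordered strategy lends itself to a compact illustration. The Python sketch below is a hedged reconstruction of that scheduling idea only: the score_fn placeholder stands in for the EM-like ML mean-shift step, and the data layout and names are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the sequential,
# priority-ordered tracking strategy described in the abstract.
# `score_fn` stands in for the EM-like maximum-likelihood mean-shift step:
# it should return (likelihood_score, new_position) for one person.

def sequential_track(persons, score_fn):
    """persons: list of track dicts, already sorted by priority.

    At each stage only the first two candidates are evaluated and the one
    with the higher ML score is committed, balancing robustness to
    occlusion against real-time efficiency."""
    remaining = list(persons)
    tracked = []
    while remaining:
        candidates = remaining[:2]                        # test at most two
        scored = [(score_fn(p), p) for p in candidates]
        (score, new_pos), best = max(scored, key=lambda item: item[0][0])
        best["position"] = new_pos                        # commit M-step result
        best["score"] = score
        tracked.append(best)
        remaining.remove(best)                            # fix its position first
    return tracked

if __name__ == "__main__":
    # Toy usage with a fake scoring function: lower id means higher priority.
    people = [{"id": 1}, {"id": 2}, {"id": 3}]
    fake_score = lambda p: (1.0 / p["id"], (10 * p["id"], 0))
    for t in sequential_track(people, fake_score):
        print(t["id"], round(t["score"], 2), t["position"])
```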


International Conference on Computer Graphics and Interactive Techniques | 2012

Attention-based addressee selection for service and social robots to interact with multiple persons

Liyuan Li; Qianli Xu; Yeow Kee Tan

For robots to interact with multiple persons, they have to be able to identify the addressees to interact with. We classify the methods of addressee detection and selection into two categories, namely, passive and active approaches. In passive approaches, the robot is programmed to detect a predefined signal, e.g., a voice command or a specific gesture, from a person who is supposed to be the addressee. In contrast, in active approaches, the robot is able to select a person as an addressee based on subtle cues that are inferred from the human pose, gaze, and facial expression. We present two new approaches for attention-based addressee selection, one passive and one active. The passive method is designed for the robot to recognize the common hand-waving gesture, where a Bayesian ensemble approach is proposed to fuse hand detections from depth segmentation, palm shape, skin color, and body pose. The active method is developed for the robot to perform natural interaction with multiple persons. It employs a novel human attention estimation algorithm based on human detection, tracking, upper-body pose recognition, face detection, gaze detection, lip motion analysis, and facial expression recognition. Extensive experiments have been conducted and the effectiveness of the proposed approaches is reported.
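As a rough illustration of the passive method's Bayesian ensemble, the snippet below fuses per-cue likelihood ratios into a posterior probability of a hand wave under a naive independence assumption. The cue names, likelihood-ratio values, and prior are hypothetical, chosen only to show the fusion mechanics.

```python
import math

# Minimal sketch of a Bayesian ensemble fusion for hand-wave detection,
# assuming each cue (depth segmentation, palm shape, skin colour, body pose)
# yields a likelihood ratio P(cue | wave) / P(cue | not wave).
# The cue values and prior below are hypothetical, not from the paper.

def wave_posterior(cue_likelihood_ratios, prior_wave=0.1):
    """Fuse independent cue likelihood ratios into P(wave | cues)."""
    log_odds = math.log(prior_wave / (1.0 - prior_wave))
    for lr in cue_likelihood_ratios:
        log_odds += math.log(lr)          # naive-Bayes style independence
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: strong depth and palm-shape evidence, weak skin/pose evidence.
cues = {"depth": 6.0, "palm_shape": 4.0, "skin_colour": 1.2, "body_pose": 1.5}
print(wave_posterior(cues.values()))      # posterior probability of a wave
```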


International Conference on Engineering Psychology and Cognitive Ergonomics | 2013

When Stereotypes Meet Robots: The Effect of Gender Stereotypes on People's Acceptance of a Security Robot

Benedict Tiong Chee Tay; Taezoon Park; Younbo Jung; Yeow Kee Tan; Alvin Hong Yee Wong

Recent developments in social robotics suggest integrating human characteristics into social robots, which allows more natural interaction between users and these robots, targeting better task performance and greater user acceptance. It is interesting to note that the recent successful integration of human characteristics has been framed by an overarching research paradigm, known as the Computers Are Social Actors (CASA) theory, which suggests that people react and respond to computers and robots in much the same way they treat other social entities. Based on the CASA research paradigm, this study further examined the impact of gender-related role stereotypes on the assessment of a social robot in a particular occupation. Although previous research in social science has found that stereotyping has a significant influence on personal decisions, involving career promotion, development, and supervision, as well as evaluations of personal competence, limited insight is available in HRI research. A between-subject experiment was conducted with 40 participants (gender balanced) at a public university in Singapore to investigate the effect of gender-related role stereotypes on user acceptance of a social robot as a security guard. Largely in line with our expectations, the results showed that users perceived the security robot with matching gender-related role stereotypes as more useful and acceptable than the mismatched security robot, as a second-degree social response.


Human-Robot Interaction | 2012

Vision-based attention estimation and selection for social robot to perform natural interaction in the open world

Liyuan Li; Xinguo Yu; Jun Li; Gang S. Wang; Ji Yu Shi; Yeow Kee Tan; Haizhou Li

In this paper, a novel vision system is proposed to estimate the attention of people from rich visual cues so that a social robot can perform natural interactions with multiple participants in public environments. The vision detection and recognition modules include multi-person detection and tracking, upper-body pose recognition, face and gaze detection, lip motion analysis for speaking recognition, and facial expression recognition. A computational approach is proposed to generate a quantitative estimation of human attention. The vision system is implemented on a robotic receptionist, “EVE”, and encouraging results have been obtained.
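One simple way to realize the quantitative attention estimate described above is a weighted combination of normalized cue scores. The weights, cue names, and threshold below are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch: combine per-person visual cues into a single attention score
# via a weighted sum, then pick the most attentive person as the addressee.
# Cue names, weights and the threshold are assumptions for illustration only.

ATTENTION_WEIGHTS = {
    "facing_robot": 0.25,         # upper-body pose oriented towards the robot
    "face_detected": 0.20,
    "gaze_on_robot": 0.30,
    "speaking": 0.15,             # from lip motion analysis
    "positive_expression": 0.10,  # from facial expression recognition
}

def attention_score(cues):
    """cues: dict mapping cue name to a value in [0, 1]."""
    return sum(w * cues.get(name, 0.0) for name, w in ATTENTION_WEIGHTS.items())

def select_addressee(people, threshold=0.5):
    """Pick the person with the highest attention score above the threshold."""
    best = max(people, key=lambda p: attention_score(p["cues"]), default=None)
    if best and attention_score(best["cues"]) >= threshold:
        return best
    return None
```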


Natural Interaction with Robots, Knowbots and Smartphones, Putting Spoken Dialog Systems into Practice | 2014

Component Pluggable Dialogue Framework and Its Application to Social Robots

Ridong Jiang; Yeow Kee Tan; Dilip Kumar Limbu; Tran Anh Dung; Haizhou Li

This paper is concerned with the design and development of a component-pluggable, event-driven dialogue framework for service robots. We abstract standard dialogue functions and encapsulate them into different types of components or plug-ins. A component can be a hardware device, a software module, an algorithm, or a database connection. The framework is empowered by a multipurpose XML-based dialogue engine, which is capable of pipeline information-flow construction, event mediation, multi-topic dialogue modeling, and different types of knowledge representation. The framework is domain-independent, cross-platform, and multilingual. Experiments on various service robots in our social robotics laboratory showed that the same framework works for all the robots that need a speech interface. The development cycle for a new dialogue system is greatly shortened, while the system's robustness, reliability, and maintainability are significantly improved.
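A minimal sketch of a component-pluggable, event-driven core is shown below, assuming a simple publish/subscribe bus in Python; the actual framework is built around an XML-based dialogue engine, so the API here is purely illustrative.

```python
# Illustrative component-pluggable, event-driven dialogue core.
# Component names and the event API are assumptions made for this sketch.

from collections import defaultdict

class DialogueBus:
    """Mediates events between pluggable components (devices, modules, DBs)."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def plug(self, event_type, handler):
        """Register a component callback for a given event type."""
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        """Deliver an event to every component plugged in for it."""
        for handler in self._handlers[event_type]:
            handler(payload)

# Example: a speech-recognition plug-in feeding a dialogue-manager plug-in.
bus = DialogueBus()
bus.plug("utterance", lambda text: bus.publish("reply", f"You said: {text}"))
bus.plug("reply", print)
bus.publish("utterance", "where is the lift lobby")
```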


Robotics and Biomimetics | 2011

A software architecture framework for service robots

Dilip Kumar Limbu; Yeow Kee Tan; Ridong Jiang; Tran Ang Dung

A service robot is a complex system, which demands a highly flexible, extensible, and maintainable software architecture framework to achieve its goals and tasks. This paper describes a software architecture framework designed to fulfill these needs. To date, the architecture has been successfully applied to the development of different service robots: Olivia, Mika, and Lucas, developed by the A⋆STAR Social Robotics Laboratory (ASORO). These service robots have different hardware and software requirements and have been exhibited at various events, including RoboCup 2010, the World Cities Summit, and TechFest 2010. During these events, the robots successfully demonstrated the functionality of the software architecture, which seamlessly integrates various robotics components in completing challenging real-world tasks. Most significantly, the architecture demonstrated its flexibility, extensibility, and maintainability across different domains. These outcomes indicate that the architecture has the potential to contribute towards the long-term goal of intelligent service robotics in different service domains.


FIRA RoboWorld Congress | 2009

Experiences with a Barista Robot, FusionBot

Dilip Kumar Limbu; Yeow Kee Tan; Chern Yuen Wong; Ridong Jiang; Hengxin Wu; Liyuan Li; Eng Hoe Kah; Xinguo Yu; Dong Li; Haizhou Li

In this paper, we describe an implemented service robot called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching, and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates different components, including a spoken dialog system, vision understanding, navigation, and a smart device gateway. In the experiments conducted during the TechFest 2008 event, FusionBot successfully demonstrated that it could autonomously serve coffee to visitors on request. Preliminary survey results indicate that the robot has the potential not only to aid general robotics research but also to contribute towards the long-term goal of intelligent service robotics in smart home environments.
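To make the coffee-serving scenario concrete, the sketch below walks a single request through the kinds of modules named above (spoken dialog, navigation, vision, smart-device gateway). All module objects and method names are hypothetical placeholders; the real FusionBot interfaces are not documented here.

```python
# Hypothetical walk-through of one coffee request across FusionBot-style
# modules. Every object and method name below is an assumed placeholder,
# used only to show how the components could cooperate.

def serve_coffee(dialog, navigation, vision, coffee_machine, arm):
    request = dialog.listen()                     # spoken dialog system
    if request.intent != "order_coffee":
        dialog.say("Sorry, I can only help with coffee orders.")
        return
    navigation.go_to("coffee_machine")            # navigation module
    coffee_machine.brew(request.coffee_type)      # smart device gateway
    cup = vision.locate("cup")                    # object recognition
    arm.grab(cup)                                 # object grabbing
    navigation.go_to(request.speaker_location)    # return to the visitor
    arm.release(cup)                              # fetching / hand-over
    dialog.say("Here is your coffee. Enjoy!")
```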


International Conference on Smart Homes and Health Telematics | 2013

Evaluation of the Pet Robot CuDDler Using Godspeed Questionnaire

Yeow Kee Tan; Alvin Hong Yee Wong; Anthony Wong; Tran Anh Dung; Adrian Tay; Dilip Limbu Kumar; Tran Huy Dat; Weng Zheng Ng; Rui Yan; Benedict Tiong Chee Tay

This work reports the development of an interactive pet companion, CuDDler, with the capability to recognize verbal and non-verbal communication acts that are tied to the emotional state of a person. CuDDler is a cuddly and affectionate interactive robotic pet that offers around-the-clock companionship to the elderly. This can positively impact their emotional wellbeing and quality of life, ultimately leading to improved overall health and reduced healthcare costs. A study using the Godspeed questionnaire [16] was carried out with the general public at a technology exhibition road-show. The experiment indicated high ratings for the Godspeed attributes “likeability” and “perceived safety”.


International Conference on Human-Computer Interaction | 2009

An Interactive Robot Butler

Yeow Kee Tan; Dilip Limbu Kumar; Ridong Jiang; Liyuan Li; Kah Eng Hoe; Xinguo Yu; Li Dong; Chern Yuen Wong; Haizhou Li

This paper describes a novel robotic butler developed by a multi-disciplinary team of researchers. The robotic butler is capable of detecting and tracking humans, recognizing hand gestures, serving beverages, holding dialog conversations with guests about their interests and preferences, and providing specific information on the facilities at the Fusionopolis building and the various technologies used by the robot. The robot employs an event-driven dialogue management system (DMS) architecture, speech recognition, ultra-wideband, vision understanding, and radio frequency identification. All the components and agents integrated in the DMS architecture are modular and can be re-used by other applications. In this paper, we first describe the design concept and the architecture of the robotic butler. Secondly, we describe in detail the workings of the speech and vision technology, as this paper mainly focuses on the human-robot interaction aspects of the social robot. Lastly, we highlight some key challenges that were faced during the implementation of the speech and vision technology on the robot.

Collaboration


Dive into Yeow Kee Tan's collaborations.

Top Co-Authors

Haizhou Li

National University of Singapore


Manfred Tscheligi

Austrian Institute of Technology