
Publication


Featured research published by Yasushi Mae.


International Journal of Humanoid Robotics | 2013

Human–robot collision avoidance using a modified social force model with body pose and face orientation

Photchara Ratsamee; Yasushi Mae; Kenichi Ohara; Tomohito Takubo; Tatsuo Arai

The ability of robots to understand human characteristics and to make themselves socially accepted by humans is an important issue if smooth collision avoidance between humans and robots is to be achieved. For smooth collision avoidance, a robot should understand not only physical components such as human position, but also social components such as body pose, face orientation, and proxemics (personal space during motion). We integrated these components into a modified social force model (MSFM) that allows robots to predict human motion and perform smooth collision avoidance. In the modified model, the short-term intended direction is described by body pose, and a supplementary force related to face orientation is added for intention estimation. Face orientation is also the best indication of the direction of personal space during motion, as verified in preliminary experiments. Our approach was implemented and tested on a real humanoid robot in a situation in which a human confronts the robot in an indoor environment. Experimental results showed that better human motion tracking was achieved with body pose and face orientation tracking. Given the face orientation as an indication of the intended direction, and observing the laws of proxemics in a human-like manner, the robot was able to perform avoidance motions that were more human-like than those produced by the original social force model (SFM) in a face-to-face confrontation.
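
For reference, the classic social force model drives each pedestrian toward a goal while repelling them from others; the LaTeX sketch below shows one plausible way a face-orientation force could be added. The gain k and the exact form of the extra term are assumptions for illustration, not the paper's formulation:

```latex
% Classic social force model plus an assumed face-orientation term
\frac{d\mathbf{v}_i}{dt} =
    \underbrace{\frac{v_i^{0}\,\mathbf{e}_i - \mathbf{v}_i}{\tau_i}}_{\text{goal attraction}}
  + \sum_{j \neq i} \mathbf{f}_{ij}^{\mathrm{rep}}
  + \underbrace{k\,\mathbf{e}_i^{\mathrm{face}}}_{\text{face-orientation term (assumed)}}
```

Here \(\mathbf{e}_i^{\mathrm{face}}\) is a unit vector along person i's face orientation, standing in for the short-term intended direction.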


intelligent robots and systems | 2013

Social navigation model based on human intention analysis using face orientation

Photchara Ratsamee; Yasushi Mae; Kenichi Ohara; Masaru Kojima; Tatsuo Arai

We propose a social navigation model that allows a robot to navigate in a human environment according to human intentions, in particular in situations where a human encounters a robot and wants to avoid it, unavoid it (maintain his/her course), or approach it. Avoiding, unavoiding, and approaching trajectories are classified based on face orientation, a social force model, and the humans' predicted motion. The proposed model is built on an analysis of human motion and behavior (especially face orientation and overlapping personal space) in preliminary experiments. Our experimental evidence demonstrates that the robot is able to adapt its motion, preserving personal distance from passers-by and approaching persons who want to interact with it. This work contributes to the future development of a human-robot socialization environment.
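
As a toy illustration of the three intent classes, the sketch below labels a passer-by from face orientation relative to the robot and closing speed; the features and thresholds are invented for illustration and are not the paper's classifier:

```python
def classify_intent(face_angle_deg: float, approach_speed: float,
                    gaze_thresh: float = 20.0, speed_thresh: float = 0.2) -> str:
    """Toy intent labeler: 'approach', 'unavoid', or 'avoid' from the
    face angle w.r.t. the robot (deg) and closing speed (m/s).
    Thresholds are illustrative assumptions."""
    facing_robot = abs(face_angle_deg) < gaze_thresh
    if facing_robot and approach_speed > speed_thresh:
        return "approach"   # looking at the robot and closing in
    if facing_robot:
        return "unavoid"    # aware of the robot, keeps the course
    return "avoid"          # gaze averted: likely to detour

print(classify_intent(face_angle_deg=5.0, approach_speed=0.5))   # approach
print(classify_intent(face_angle_deg=60.0, approach_speed=0.5))  # avoid
```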


international conference on mechatronics and automation | 2012

People tracking with body pose estimation for human path prediction

Photchara Ratsamee; Yasushi Mae; Kenichi Ohara; Tomohito Takubo; Tatsuo Arai

Prediction and observation of human motion are essential functions for robots co-existing with humans in everyday environments. We propose a people motion tracking and prediction approach that takes advantage of detailed 3D information about the positions of body joints. Using the shoulder positions from a geometrical skeleton diagram of a human's upper body, body pose is estimated from the proposed human kinematic model. Human motion tracking and path prediction are achieved via an extended Kalman filter. The proposed method is verified in an indoor environment where humans pass by each other. Experimental results demonstrate that walking people and their body poses are tracked robustly and predicted accurately, with fewer occlusions than in traditional human tracking.
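
A minimal sketch of the tracking-and-prediction step, using a linear constant-velocity Kalman filter in place of the paper's extended Kalman filter; the time step and noise covariances are illustrative assumptions:

```python
import numpy as np

dt = 0.1                                  # frame interval [s], assumed
F = np.array([[1, 0, dt, 0],              # constant-velocity motion model
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],               # we only observe (x, y)
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)                      # process noise (assumed)
R = 0.05 * np.eye(2)                      # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle on state x = [x, y, vx, vy]."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)
for z in [np.array([0.00, 0.00]), np.array([0.05, 0.01])]:
    x, P = kf_step(x, P, z)
print(x[:2], x[2:])                       # filtered position and velocity
```

The filtered velocity gives a short-horizon path prediction by extrapolating the state with the same motion model.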


international conference on robotics and automation | 2014

Autonomous acquisition of generic handheld objects in unstructured environments via sequential back-tracking for object recognition

Krishneel Chaudhary; Yasushi Mae; Masaru Kojima; Tatsuo Arai

Robots operating in human environments must be able to autonomously acquire object representations in order to perform object search and recognition tasks without human intervention. However, autonomous acquisition of object appearance models in an unstructured and cluttered human environment is challenging, since the object boundaries are not known a priori. In this paper, we present a novel method that uses robotic vision to solve the problem of unknown object boundaries for handheld objects in an unstructured environment. The objective is to segment objects without prior knowledge of the objects that humans interact with daily. In particular, we present a method that segments handheld objects by observing the human-object interaction process and performs incremental learning on the acquired models using a support vector machine (SVM). The unknown object boundary is estimated by sequential back-tracking, exploiting the affine relationship between consecutive frames. Segmentation is achieved using the identified optimal object boundaries, and the extracted models are used in future object search and recognition tasks.
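
A minimal sketch of the back-tracking idea with OpenCV: track feature points from the current frame into the previous one, fit the affine relation between the frames, and map a candidate object box backwards. Function choices and parameters are assumptions, not the paper's pipeline:

```python
import cv2
import numpy as np

def backtrack_region(prev_gray, curr_gray, curr_box):
    """Map a bounding box in the current frame into the previous frame
    via the affine relation between consecutive frames (sketch only)."""
    pts = cv2.goodFeaturesToTrack(curr_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    assert pts is not None, "no trackable features found"
    prev_pts, status, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray,
                                                   pts, None)
    good_curr = pts[status.ravel() == 1]
    good_prev = prev_pts[status.ravel() == 1]
    M, _ = cv2.estimateAffinePartial2D(good_curr, good_prev)  # 2x3 affine
    x, y, w, h = curr_box
    corners = np.array([[x, y], [x + w, y], [x, y + h], [x + w, y + h]],
                       dtype=np.float32)
    mapped = cv2.transform(corners[None], M)[0]   # corners in prev frame
    return cv2.boundingRect(mapped)               # box one frame earlier
```

Applying this repeatedly walks a boundary hypothesis back through the sequence, which is the "sequential" part of the idea.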


international conference on robotics and automation | 2013

BMI-based learning system for appliance control automation

Christian I. Penaloza; Yasushi Mae; Kenichi Ohara; Tatsuo Arai

In this research we present a non-invasive brain-machine interface (BMI) system that allows patients with motor paralysis to control electronic appliances in a hospital room. The novelty of our system compared to other BMI applications is that it gradually becomes autonomous by learning user actions (e.g., turning windows and lights on/off) under certain environmental conditions (temperature, illumination, etc.) and brain states (e.g., awake, sleepy). By providing the system with learning capabilities, patients are relieved of the mental fatigue or stress caused by continuously controlling appliances through a BMI. We present an interface that allows the user to select and control appliances using electromyogram (EMG) signals generated by muscle contractions such as eyebrow movement. Our learning approach consists of two steps: 1) monitoring user actions, input data from sensors distributed around the room, and electroencephalogram (EEG) data from the user, and 2) using an extended version of the Bayes point machine, trained with expectation propagation, to approximate a posterior probability from previously observed user actions under a similar combination of brain states and environmental conditions. Experimental results with volunteers demonstrate that our system provides a satisfactory user experience and achieves over 85% overall learning performance after only a few trials.
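
The paper's learner is an extended Bayes point machine trained with expectation propagation; as a rough, self-contained stand-in, the sketch below approximates a Bayes point by averaging the decision functions of many perceptrons trained on shuffled data. The features, labels, and averaging scheme are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Perceptron

def bayes_point_predict(X, y, X_new, n_samples=50, seed=0):
    """Rough Bayes-point approximation: average many linear classifiers
    consistent with the data, standing in for the EP-trained Bayes
    point machine described in the paper."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X_new))
    for _ in range(n_samples):
        idx = rng.permutation(len(X))            # order changes the fit
        clf = Perceptron(shuffle=False).fit(X[idx], y[idx])
        votes += clf.decision_function(X_new)
    return (votes / n_samples > 0).astype(int)   # sign of the mean

# Toy example: [drowsiness score, room temperature] -> light on/off
X = np.array([[0.9, 22.0], [0.8, 30.0], [0.1, 21.0], [0.2, 29.0]])
y = np.array([1, 1, 0, 0])                       # 1 = user turned light on
print(bayes_point_predict(X, y, np.array([[0.85, 25.0]])))
```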


international conference on mechatronics and automation | 2012

Social human behavior modeling for robot imitation learning

Christian I. Penaloza; Yasushi Mae; Kenichi Ohara; Tatsuo Arai

Social imitation learning is an essential skill that humans use to achieve social acceptance, increase awareness in unknown situations, or achieve cultural adaptation. In this work we address the problem of social imitation learning in a many-to-one learning scheme (a group of humans to one robot), where the humans do not necessarily have teaching roles. In contrast to common imitation learning approaches based on one-to-one schemes with two agents (human teacher and robot student), our approach is inspired by social learning theory and consists of modeling human behavior by observing multiple humans and discovering common behavioral patterns. We propose a common framework for social behavior feature extraction that can collect essential information about various social behaviors, such as multi-person trajectories and multiple-body poses. Since social imitation learning is shaped by the behavior of others, and the more individuals exhibit a behavior, the more likely one is to engage in it, our modeling approach also incorporates a social force model that triggers social behavior learning when a group of people is observed. Finally, collective behavior modeling is achieved by clustering features with a Gaussian mixture model. Experimental results show that our approach is suitable for modeling social human behavior in situations such as emergency evacuation and Japanese-style greeting (bowing).
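
The final clustering step can be sketched directly with scikit-learn's Gaussian mixture model; the two behavior features and the synthetic data below are illustrative assumptions, not the paper's feature set:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical behavior features: [heading change (rad), torso bend (deg)].
rng = np.random.default_rng(0)
walking = rng.normal([0.0,  5.0], [0.3, 2.0], size=(50, 2))
bowing  = rng.normal([0.0, 45.0], [0.3, 5.0], size=(50, 2))
features = np.vstack([walking, bowing])

# Fit two behavior clusters and assign a new observation to one of them.
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
print(gmm.predict([[0.1, 40.0]]))   # -> cluster of the bowing behavior
```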


Intelligent Service Robotics | 2013

Web-enhanced object category learning for domestic robots

Christian I. Penaloza; Yasushi Mae; Kenichi Ohara; Tomohito Takubo; Tatsuo Arai

We present a system architecture for domestic robots that allows them to learn object categories after a single sample object has been learned. We explore the situation in which a human teaches a robot a novel object, and the robot enhances this learning using a large amount of image data from the Internet. The main goal of this research is to give a robot the capability to enhance its learning while minimizing the time and effort required for a human to train it. Our active learning approach consists of learning the object's name through a speech interface and creating a visual object model using a depth-based attention model adapted to the robot's personal space. Given the object's name (keyword), a large number of object-related images are collected from two main image sources (Google Images and the LabelMe website). We deal with the problem of separating good training samples from noisy images in two steps: (1) similar-image selection using a Simile Selector classifier, and (2) non-real-image filtering by implementing a variant of Gaussian discriminant analysis. After web image selection, object category classifiers are trained and tested using different objects of the same category. Our experiments demonstrate the effectiveness of our robot learning approach.
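
As a sketch of the non-real-image filtering step, the snippet below uses scikit-learn's quadratic discriminant analysis as a stand-in for the paper's GDA variant; the two per-image features and all numbers are made up for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical per-image features: [edge density, color entropy].
# Drawings/clip-art (label 0) vs. real photos (label 1); all values
# are invented, and the paper uses its own GDA variant, not sklearn's.
X = np.array([[0.10, 2.1], [0.12, 3.0], [0.09, 1.5], [0.14, 2.6],
              [0.55, 6.8], [0.60, 7.9], [0.52, 5.9], [0.58, 7.4]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

qda = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(X, y)
print(qda.predict([[0.50, 6.5]]))   # keep only images predicted as 1
```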


ieee/sice international symposium on system integration | 2012

Software interface for controlling diverse robotic platforms using BMI

Christian I. Penaloza; Yasushi Mae; Kenichi Ohara; Tatsuo Arai

This paper presents a software interface that allows a user to control different types of robotic systems through a brain-machine interface. Unlike common device-specific BMI systems, our software architecture maps simple EEG-based commands to different functionalities depending on the robotic platform, so the user does not have to learn to generate new EEG commands for different robots. The graphical user interface provides a mechanism that allows the user to navigate through menus using EMG signals (e.g., eye blinks) and then execute robot commands using EEG signals. Our software is based on a modular design that allows new robotic platforms to be integrated with easy customization. Our current prototype explores the controllability of a humanoid robot, a flying robot, and a pan-tilt robot through the proposed software interface. Experimental evidence shows that the system achieves good user satisfaction as well as easy controllability of different types of robotic systems.
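
A minimal sketch of the platform-abstraction idea: a small, device-independent BMI event vocabulary re-mapped to platform-specific actions. All platform and action names below are hypothetical:

```python
# Device-independent BMI events mapped to platform-specific actions.
COMMAND_MAP = {
    "humanoid": {"push": "walk_forward", "relax": "stop"},
    "flying":   {"push": "ascend",       "relax": "hover"},
    "pan_tilt": {"push": "pan_right",    "relax": "center"},
}

def dispatch(platform: str, bmi_event: str) -> str:
    """Translate one EEG/EMG-derived event into a platform action."""
    return COMMAND_MAP[platform].get(bmi_event, "noop")

# The same mental command drives different robots without retraining:
print(dispatch("humanoid", "push"))  # -> walk_forward
print(dispatch("flying", "push"))    # -> ascend
```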


intelligent robots and systems | 2013

Lifelogging keyframe selection using image quality measurements and physiological excitement features

Photchara Ratsamee; Yasushi Mae; Amornched Jinda-Apiraksa; Jana Machajdik; Kenichi Ohara; Masaru Kojima; Robert Sablatnig; Tatsuo Arai

Keyframe selection is the process of finding a representative frame in an image sequence. Although mostly known from video processing, keyframe selection faces new challenges in the lifelog domain. To obtain a keyframe that is close to a user-selected frame, we propose a keyframe selection method based on image quality measurements and excitement features. Image quality measurements such as contrast, color variance, sharpness, noise, and saliency are used to filter for high-quality images. However, high-quality images are not necessarily keyframes, because humans also use emotions in the selection process. In this study, we employ a biosensor to measure human excitement. In a previous investigation, keyframe selection using only image quality measurements yielded an acceptance rate of 79.70%; our proposed method achieves an acceptance rate of 84.45%.
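
As an illustration of the image-quality side, the sketch below combines three of the cues named above (contrast, color variance, sharpness) into a single score with OpenCV; the weights and normalization are assumptions, and the paper's method also uses noise and saliency measures plus the excitement features:

```python
import cv2
import numpy as np

def quality_score(img_bgr: np.ndarray) -> float:
    """Combine contrast, color variance, and sharpness into one score
    in [0, 1]; weights and scaling are illustrative assumptions."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    contrast = gray.std() / 255.0
    color_var = img_bgr.reshape(-1, 3).std(axis=0).mean() / 255.0
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # focus measure
    sharpness = min(sharpness / 1000.0, 1.0)            # crude normalization
    return 0.4 * contrast + 0.3 * color_var + 0.3 * sharpness

img = cv2.imread("frame_0001.jpg")                      # hypothetical frame
if img is not None:
    print(quality_score(img))
```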


robot and human interactive communication | 2012

2D spherical spaces for objects recognition under harsh lighting conditions

Amr Almaddah; Yasushi Mae; Kenichi Ohara; Tatsuo Arai

For object recognition tasks in unknown environments, we propose a novel approach to illumination recovery of surfaces with cast shadows and specularities that uses the objects' spherical-space properties. Robust object recognition in complex environments is fundamental to robot intelligence and manipulation. The proposed method reduces the effects of illumination on the object detection and recognition processes. In this work, object reference images are regenerated to match the lighting of the scene, increasing the success rate of the recognition process. First, a database is generated by computing albedo and surface normals from captured 2D images of the target objects. Next, the scene lighting direction and illumination coefficients are estimated. Finally, using the computed spherical-space properties, we regenerate the object reference data to match the illumination conditions of the search area. Practical real-time processing speed and small image sizes were considered when designing the framework. In contrast to other techniques, our work requires no 3D models for object training and takes images from a single camera as input. Experiments showed that the proposed 2D spherical spaces yield noticeable improvements in an object identification task performed by an autonomous robot in a harshly illuminated environment.
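
As a simplified illustration of the relighting idea, the sketch below estimates a single light direction under a plain Lambertian model, I = albedo * max(n . l, 0), and regenerates reference intensities under that light; the paper's spherical-space formulation with illumination coefficients is more general than this toy model:

```python
import numpy as np

def estimate_light_direction(intensities, normals, albedo):
    """Least-squares light direction under a Lambertian model
    I = albedo * (n . l); assumes no pixel is in shadow."""
    A = albedo[:, None] * normals              # (N, 3) design matrix
    l, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return l / np.linalg.norm(l)

def relight(albedo, normals, light_dir):
    """Regenerate reference intensities under the estimated lighting."""
    return albedo * np.clip(normals @ light_dir, 0.0, None)

# Toy example: four pixels with known normals and albedo.
normals = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.87],
                    [0.0, 0.5, 0.87], [0.7, 0.7, 0.14]])
albedo = np.array([0.8, 0.8, 0.6, 0.6])
true_l = np.array([0.3, 0.1, 0.95]); true_l /= np.linalg.norm(true_l)
I = albedo * np.clip(normals @ true_l, 0.0, None)
print(estimate_light_direction(I, normals, albedo))   # ~ true_l
```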

Collaboration


Dive into Yasushi Mae's collaboration.

Top Co-Authors


Tatsuo Arai

Beijing Institute of Technology


Jana Machajdik

Vienna University of Technology
