Publications


Featured research published by Satoru Satake.


human-robot interaction | 2009

How to approach humans?: strategies for social robots to initiate interaction

Satoru Satake; Takayuki Kanda; Dylan F. Glas; Michita Imai; Hiroshi Ishiguro; Norihiro Hagita

This paper proposes a model of approach behavior with which a robot can initiate conversation with people who are walking. We developed the model by learning from the failures of a simplistic approach behavior used in a real shopping mall. Sometimes people were unaware of the robot's presence, even when it spoke to them. Sometimes people were not sure whether the robot was really trying to start a conversation, and they did not start talking with it even though they displayed interest. To prevent such failures, our model includes the following functions: predicting the walking behavior of people, choosing a target person, planning an approach path, and nonverbally indicating the intention to initiate a conversation. The approach model was implemented and used in a real shopping mall. The field trial demonstrated that our model significantly improves the robot's performance in initiating conversations.
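The target-selection step above can be sketched as a toy routine; the function names, the linear-prediction shortcut, and the thresholds are illustrative assumptions, not the paper's actual planner.

```python
import math

def predict_position(pos, vel, t):
    """Linearly extrapolate a pedestrian's position t seconds ahead
    (a stand-in for the paper's walking-behavior prediction)."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def choose_target(people, robot_pos, horizon=3.0, max_dist=5.0):
    """Pick the person whose predicted position is closest to the robot
    and who can be met from a frontal direction (moving toward it)."""
    best, best_dist = None, max_dist
    for pid, (pos, vel) in people.items():
        future = predict_position(pos, vel, horizon)
        d = math.dist(future, robot_pos)
        # frontal check: the person's velocity points toward the robot
        to_robot = (robot_pos[0] - pos[0], robot_pos[1] - pos[1])
        frontal = to_robot[0] * vel[0] + to_robot[1] * vel[1] > 0
        if frontal and d < best_dist:
            best, best_dist = pid, d
    return best
```

A person walking toward the robot is preferred over one who is distant or moving away, mirroring the model's "reachable from a frontal direction" criterion.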


intelligent robots and systems | 2009

Field trial of networked social robots in a shopping mall

Masahiro Shiomi; Takayuki Kanda; Dylan F. Glas; Satoru Satake; Hiroshi Ishiguro; Norihiro Hagita

This paper reports the challenges of developing multiple social robots that operate in a shopping mall. We developed a networked robot system that coordinates multiple social robots and sensors to provide efficient service to customers. It directs the tasks of robots based on their positions and people's walking behavior, manages the paths of robots, and coordinates the conversational performance between two robots. Laser range finders were distributed in the environment to estimate people's positions. The system estimates such human walking behaviors as “stopping” or “idle walking” to direct robots to offer appropriate services to appropriate people. Each robot interacts with people to provide recommendations and route information about shops. The system sometimes uses two robots simultaneously to lead people from one place to another. The field trial, conducted in a shopping mall where four robots interacted with 414 people, revealed the effectiveness of the networked robot system for guiding people around a shopping mall as well as for increasing their interest.
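The behavior-based task direction can be sketched as a greedy assignment; the speed thresholds and the nearest-person heuristic are illustrative assumptions, not the system's actual coordination logic.

```python
import math

def classify_behavior(speed):
    """Rough behavior classes from walking speed in m/s
    (thresholds are assumed for illustration)."""
    if speed < 0.3:
        return "stopping"
    if speed < 1.0:
        return "idle walking"
    return "busy walking"

def assign_tasks(robots, people):
    """Greedily send each free robot to the nearest person whose
    behavior suggests openness to interaction."""
    targets = {pid: pos for pid, (pos, speed) in people.items()
               if classify_behavior(speed) != "busy walking"}
    assignment = {}
    for rid, rpos in robots.items():
        if not targets:
            break
        pid = min(targets, key=lambda p: math.dist(targets[p], rpos))
        assignment[rid] = pid
        del targets[pid]       # one robot per person
    return assignment
```

People walking briskly are skipped, so robots approach only those estimated to be stopping or idly walking.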


human-robot interaction | 2012

How do people walk side-by-side?: using a computational model of human behavior for a social robot

Yoichi Morales; Satoru Satake; Rajibul Huq; Dylan F. Glas; Takayuki Kanda; Norihiro Hagita

This paper presents a computational model of side-by-side walking for human-robot interaction (HRI). In this work we address the importance of the future motion utility (motion anticipation) of the two walking partners. Previous studies considered only a robot moving alongside a person without collisions, using simple velocity-based predictions. In contrast, our proposed model includes two major considerations. First, it considers the current goal, modeling side-by-side walking as a process of moving toward a goal while maintaining a relative position with the partner. Second, it takes the partner's utility into consideration; it models side-by-side walking as a phenomenon in which two agents maximize mutual utilities rather than a single agent's utility. The model is constructed and validated with a set of trajectories from pairs of people recorded walking side by side. Finally, our proposed model was tested on an autonomous robot walking side-by-side with participants and demonstrated to be effective.
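The mutual-utility idea can be sketched as a one-step greedy planner that scores goal progress plus preferred spacing and maximizes the SUM of both agents' utilities; the utility terms, weights, and candidate-move grid are illustrative assumptions, not the paper's model.

```python
import math

def utility(pos, goal, partner, spacing=1.2, w_goal=1.0, w_side=0.5):
    """Utility of a position: progress toward the goal plus keeping
    the preferred side-by-side spacing with the partner."""
    return (-w_goal * math.dist(pos, goal)
            - w_side * abs(math.dist(pos, partner) - spacing))

def best_joint_step(p1, p2, goal, step=0.5):
    """Pick the pair of moves maximizing the sum of both agents'
    utilities (mutual, rather than single-agent, optimization)."""
    moves = [(dx * step, dy * step) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    best, best_u = None, -float("inf")
    for m1 in moves:
        q1 = (p1[0] + m1[0], p1[1] + m1[1])
        for m2 in moves:
            q2 = (p2[0] + m2[0], p2[1] + m2[1])
            u = utility(q1, goal, q2) + utility(q2, goal, q1)
            if u > best_u:
                best, best_u = (q1, q2), u
    return best
```

Because each agent's utility depends on the other's candidate position, the chosen step keeps the pair abreast while both advance, rather than letting one agent optimize alone.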


robotics: science and systems | 2011

An Interaction Design Framework for Social Robots.

Dylan F. Glas; Satoru Satake; Takayuki Kanda; Norihiro Hagita

We present a novel design framework enabling the development of social robotics applications by cross-disciplinary teams of programmers and interaction designers. By combining a modular back-end software architecture with an easy-to-use graphical interface for developing interaction sequences, this system enables programmers and designers to work in parallel to develop robot applications and tune the subtle details of social behaviors. In this paper, we describe the structure of our design framework, and we present an experimental evaluation of our system showing that it increases the effectiveness of programmer-designer teams developing social robot applications.

Keywords: human-robot interaction; software development; social robotics; programming interfaces
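The division of labor described above can be sketched minimally: programmers register behavior implementations, and designers compose them into sequences (here plain lists standing in for the paper's graphical tool). All names are hypothetical.

```python
def run_sequence(sequence, registry, context):
    """Execute a designer-authored sequence of named behaviors, each
    backed by a programmer-supplied function that mutates the shared
    interaction context."""
    executed = []
    for name in sequence:
        registry[name](context)   # look up the implementation by name
        executed.append(name)
    return executed

# Programmer side: behavior implementations.
registry = {
    "greet": lambda ctx: ctx["log"].append("Hello!"),
    "offer_help": lambda ctx: ctx["log"].append("May I help you?"),
}

# Designer side: a sequence composed without touching the code above.
ctx = {"log": []}
run_sequence(["greet", "offer_help"], registry, ctx)
```

Keeping the two sides decoupled by name is what lets programmers and designers work in parallel.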


Mobile Information Systems | 2008

Home-Explorer: Ontology-based physical artifact search and hidden object detection system

Bin Guo; Satoru Satake; Michita Imai

A new system named Home-Explorer that searches for and finds physical artifacts in a smart indoor environment is proposed. The view on which it is based is artifact-centered and uses sensors attached to everyday artifacts (called smart objects) in the real world. This paper makes two main contributions. First, it addresses the robustness of the embedded sensors, which is seldom discussed in previous smart-artifact research. Because sensors may be broken or fail to work under certain conditions, smart objects can become hidden ones; however, current systems provide no mechanism to detect and manage objects when this problem occurs. Second, there is no common context infrastructure for building smart-artifact systems, which makes it difficult for separately developed applications to interact with each other and to share and reuse knowledge. Unlike previous systems, Home-Explorer builds on an ontology-based knowledge infrastructure named Sixth-Sense, which makes it easy for the system to interact with other applications or agents based on the same ontology. The hidden object problem is also reflected in our ontology, which enables Home-Explorer to deal with both smart objects and hidden objects. A set of rules for deducing an object's status or location information and for locating hidden objects is described and evaluated.
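One such rule can be sketched as follows: when an object's own sensor reading is missing (a "hidden object"), its location is deduced from a containment relation. The relation names and objects are illustrative, not the system's actual ontology.

```python
def locate(obj, readings, containment):
    """Deduce an object's location. A hidden object (no working sensor
    reading) is located by the rule: an object placed in a container
    is wherever that container is."""
    if obj in readings:                # sensor working: direct reading
        return readings[obj]
    holder = containment.get(obj)      # hidden: fall back to the rule
    if holder is not None:
        return locate(holder, readings, containment)
    return None                        # location unknown
```

The recursion follows chains of containment, so an object inside a box inside a drawer is still resolved as long as something in the chain is sensed.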


IEEE Transactions on Robotics | 2013

A Robot that Approaches Pedestrians

Satoru Satake; Takayuki Kanda; Dylan F. Glas; Michita Imai; Hiroshi Ishiguro; Norihiro Hagita

When robots serve in urban areas such as shopping malls, they will often be required to approach people in order to initiate service. This paper presents a technique for human–robot interaction that enables a robot to approach people who are passing through an environment. For a successful approach, our proposed planner first searches for a target person in the public distance zone, anticipating his/her future position and behavior. It chooses a person who does not seem busy and can be reached from a frontal direction. Once the robot successfully approaches the person within the social distance zone, it identifies the person's reaction and provides a timely response by coordinating its body orientation. The system was tested in a shopping mall and compared with a simple approaching method. The result demonstrates a significant improvement in approaching performance; the simple method was only 35.1% successful, whereas the proposed technique showed a success rate of 55.9%.


human-robot interaction | 2011

Modeling environments from a route perspective

Luis Yoichi Morales Saiki; Satoru Satake; Takayuki Kanda; Norihiro Hagita

Environment attributes are perceived or remembered differently according to the perspective used. In this study, two different perspectives, a survey perspective and a route perspective, are explained and discussed. This paper proposes an approach for modeling human environments from a route perspective, which is the perspective used when a human navigates through the environment. The process of semi-autonomous route-perspective data extraction and modeling by a mobile robot equipped with a laser sensor and a camera is detailed. Finally, as an example of a route-perspective application, a route-direction robot was developed and tested in a real mall environment. Experimental results show the advantages of the proposed route-perspective model compared with a survey-perspective approach. Moreover, the route model's performance is comparable to that of an expert giving route guidance in the mall.
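The route-perspective idea can be illustrated with a tiny helper that turns map (survey) coordinates into the kind of egocentric attribute a direction-giving robot needs, e.g. "turn left" at a waypoint. The threshold is an assumed value, not the paper's.

```python
import math

def turn_at(p_prev, p, p_next, thresh=math.radians(30)):
    """Classify the turn at waypoint p from the change in heading,
    yielding a route-perspective instruction rather than a
    survey-perspective coordinate."""
    h1 = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
    h2 = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
    # wrap the heading change into (-pi, pi]
    d = (h2 - h1 + math.pi) % (2 * math.pi) - math.pi
    if d > thresh:
        return "turn left"
    if d < -thresh:
        return "turn right"
    return "go straight"
```

Applied along a recorded trajectory, such labels (combined with landmarks seen by the camera) are the raw material of route directions.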


ieee wic acm international conference on intelligent agent technology | 2006

Sixth-Sense: Context Reasoning for Potential Objects Detection in Smart Sensor Rich Environment

Bin Guo; Satoru Satake; Michita Imai

A new system named Sixth-Sense is proposed for obtaining physical-world information in a smart space. Our view is object-centered: sensors are attached to several objects in the space. Our focus is on using the sensor-equipped objects to obtain information about objects with no attached sensors, i.e., the so-called potential object detection problem. Sixth-Sense resolves this problem with the Sixth-Sense Skeleton and inference rules. A series of inference rules is presented and illustrated.


data management for sensor networks | 2006

MeT: a real world oriented metadata management system for semantic sensor networks

Hideyuki Kawashima; Yutaka Hirota; Satoru Satake; Michita Imai

A semantic sensor network describes the physical world using the metadata obtained from a sensor network. In this paper, we present our design and implementation of MeT, a real-world-oriented metadata management system for semantic sensor networks. MeT generates metadata either statically, based on the application semantics, or dynamically, using inference rules. It then constructs relationships between physical objects through these metadata. The sensor devices in MeT include RFID tags, position sensors, and Motes. Users can query MeT about physical objects through either a Graphical User Interface or Robovie, a communication robot. We give several concrete examples of querying physical objects in MeT to demonstrate the functionality of our system.
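The static/dynamic metadata split can be sketched as a toy triple store: facts added directly stand in for statically generated metadata, and rules derive further triples, standing in for the inference-rule path. All names and relations are illustrative.

```python
class MetadataStore:
    """Toy triple store: static facts plus rule-derived metadata."""

    def __init__(self):
        self.facts = set()    # (subject, relation, object) triples
        self.rules = []       # functions: facts -> iterable of triples

    def add(self, s, r, o):
        self.facts.add((s, r, o))

    def add_rule(self, rule):
        self.rules.append(rule)

    def infer(self):
        """Apply rules to a fixed point, adding derived triples."""
        changed = True
        while changed:
            changed = False
            for rule in self.rules:
                for fact in list(rule(self.facts)):
                    if fact not in self.facts:
                        self.facts.add(fact)
                        changed = True

    def query(self, s=None, r=None, o=None):
        """Return triples matching the pattern (None = wildcard)."""
        return [f for f in self.facts
                if s in (None, f[0]) and r in (None, f[1]) and o in (None, f[2])]
```

A symmetric "near" relation, for example, is one static fact plus one rule, and queries then see both directions.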


human-robot interaction | 2014

Destination unknown: walking side-by-side without knowing the goal

Ryo Murakami; Luis Yoichi Morales Saiki; Satoru Satake; Takayuki Kanda; Hiroshi Ishiguro

Walking side by side is a common situation when we go from one place to another with another person while talking. Our previous study reported a basic mechanism for side-by-side walking, but in the previous model it was crucial that each agent knew where he or she was going, i.e., the route to the destination. However, we have recognized the need to model the situation where one of the agents does not know the destination. The extended model presented in this work has two states: a leader-follower state and a collaborative state. Depending on whether the follower agent has obtained a reliable estimate of the route to follow, the walking agents transition between the two states. The model is calibrated with trajectories acquired from pairs of people walking side by side and then tested in a human-robot interaction scenario. The results demonstrate that the new extended model achieves better side-by-side performance than a standard method without knowledge of the subgoal.
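The two-state switching can be sketched as a transition function driven by the follower's route-estimate confidence; the thresholds, and the hysteresis gap between them, are assumptions for illustration, not the paper's calibrated values.

```python
def update_state(state, route_confidence, enter=0.8, leave=0.4):
    """Transition between the model's two walking states based on how
    confident the follower is about the route. The gap between the
    thresholds adds hysteresis so the state does not flicker."""
    if state == "leader-follower" and route_confidence >= enter:
        return "collaborative"
    if state == "collaborative" and route_confidence < leave:
        return "leader-follower"
    return state
```

Until the follower has a reliable route estimate the leader guides; once confidence is high the pair walks collaboratively, falling back only if confidence drops well below the entry threshold.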

Collaboration


Dive into Satoru Satake's collaborations.

Top Co-Authors


Bin Guo

Northwestern Polytechnical University


Kotaro Hayashi

Tokyo University of Agriculture and Technology
