Publication


Featured research published by Takamasa Iio.


PLOS ONE | 2015

Effectiveness of Social Behaviors for Autonomous Wheelchair Robot to Support Elderly People in Japan

Masahiro Shiomi; Takamasa Iio; Koji Kamei; Chandraprakash Sharma; Norihiro Hagita

We developed a wheelchair robot to support the movement of elderly people and implemented two functions to enhance their intention to use it: speaking behavior that conveys place- and location-related information, and speed adjustment based on individual preferences. Our study examines how evaluations of our wheelchair robot differ from those of human caregivers and a conventional autonomous wheelchair without the two proposed functions in a moving-support context. Twenty-eight senior citizens participated in the experiment and evaluated the three conditions. Our measurements consisted of questionnaire items and the coding of free-style interview results. The results revealed that, on some items, the elderly participants rated our wheelchair robot higher than both the wheelchair without the two functions and the human caregivers.
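The two functions described above suggest a simple control loop; the following Python sketch is only an illustration of that idea, not the authors' implementation. The landmark list, announcement radius, preference factor, and speed limits are all hypothetical.

import math

# Hypothetical landmark announcements keyed by (x, y) position in metres.
LANDMARKS = {
    (5.0, 2.0): "We are passing the dining hall on your right.",
    (12.0, 8.0): "The garden entrance is just ahead.",
}

def speaking_behavior(position, announced, radius=1.5):
    """Return an utterance when the wheelchair comes within `radius` metres
    of a landmark that has not been announced yet."""
    for landmark, utterance in LANDMARKS.items():
        if landmark in announced:
            continue
        if math.dist(position, landmark) <= radius:
            announced.add(landmark)
            return utterance
    return None

def adjusted_speed(base_speed, preference):
    """Scale the nominal speed by an individual preference factor,
    clamped to a conservative range for safety."""
    return max(0.3, min(base_speed * preference, 1.2))  # m/s

# Example: a user who prefers a slightly slower ride.
announced = set()
print(adjusted_speed(1.0, preference=0.8))        # -> 0.8
print(speaking_behavior((5.5, 2.3), announced))   # -> dining hall utterance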


Intelligent Robots and Systems | 2009

Lexical entrainment in human-robot interaction: Can robots entrain human vocabulary?

Takamasa Iio; Masahiro Shiomi; Kazuhiko Shinozawa; Takahiro Miyashita; Takaaki Akimoto; Norihiro Hagita

A communication robot must recognize a referred-to object to support us in daily life. However, given our wide human vocabulary, we often refer to objects in terms that are incomprehensible to the robot. This paper focuses on lexical entrainment to solve this problem. Lexical entrainment is the phenomenon in which people tend to adopt the terms of their interlocutor. While it has been well studied in human-computer interaction, few published papers have approached it in human-robot interaction. To investigate how lexical entrainment occurs in human-robot interaction, we conducted experiments in which people instructed the robot to move objects. Our results show that two types of lexical entrainment occur in human-robot interaction. We also discuss the effects of the state of objects on lexical entrainment. Finally, we developed a test bed system for recognizing a referred-to object on the basis of knowledge from our experiments.
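As a rough illustration of how such a test bed might exploit entrainment, the Python sketch below matches a user's referring expression against a per-object lexicon that includes the robot's own terms, on the assumption that users who have entrained will reuse those terms. The objects and terms here are hypothetical, not taken from the paper.

# Hypothetical lexicon: each object is known by the robot's own term plus
# a few common alternatives observed in prior dialogues.
OBJECT_LEXICON = {
    "obj_1": {"mug", "cup", "coffee cup"},
    "obj_2": {"stapler"},
    "obj_3": {"remote", "remote control", "controller"},
}

def recognize_referent(utterance: str):
    """Return the object whose lexicon entry best matches the utterance.

    Because users tend to entrain to the robot's vocabulary, the robot's
    preferred term is expected to appear verbatim in later references,
    which keeps this simple substring match reasonably effective."""
    text = utterance.lower()
    best, best_len = None, 0
    for obj, terms in OBJECT_LEXICON.items():
        for term in terms:
            if term in text and len(term) > best_len:
                best, best_len = obj, len(term)
    return best

print(recognize_referent("Please move the coffee cup to the shelf"))  # -> obj_1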


International Journal of Social Robotics | 2015

Lexical Entrainment in Human Robot Interaction

Takamasa Iio; Masahiro Shiomi; Kazuhiko Shinozawa; Katsunori Shimohara; Mitsunori Miki; Norihiro Hagita

This paper reveals that lexical entrainment, in which a person tends to change her verbal expressions to match those used by her addressee, occurs in interactions between people and a robot when they refer to an object in a shared physical space. Many studies argue that lexical entrainment is crucial for understanding the principles of human dialogue and for developing natural language interfaces for artificial media. However, few studies of it exist in human-robot interaction in which humans and a robot share a physical space. If lexical entrainment occurs in situations where a physical space is shared with a robot, such findings will contribute to the development of natural language interfaces for social robots. We designed experimental tests in which participants referred to an object and a robot confirmed it, and we measured the extent to which the participants repeated the same verbal expressions used by the robot. Our subjects tended to adopt both the same verbal expressions and the same lexical categories as the robot.


International Journal of Social Robotics | 2011

Investigating Entrainment of People’s Pointing Gestures by Robot’s Gestures Using a WOZ Method

Takamasa Iio; Masahiro Shiomi; Kazuhiko Shinozawa; Takaaki Akimoto; Katsunori Shimohara; Norihiro Hagita

A person’s behavior tends to be similar to that of the robot with which he or she is interacting, and this tendency is called entrainment. In this paper, we reveal the robot gestures that are important to the entrainment of people’s pointing gestures. We conducted a laboratory experiment using a Wizard-of-Oz paradigm in which a participant and a robot have an object-reference conversation. The frequencies of a participant’s pointing gestures in conversation were compared among three conditions: (1) gazing, (2) pointing, and (3) gazing & pointing. The results show that the participants in the gazing & pointing condition used the most pointing gestures, while those in the pointing condition used the fewest. This suggests that a robot’s gazing gestures play an important role in the entrainment of people’s pointing gestures.


Human-Agent Interaction | 2016

Communication Cues in a Human-Robot Touch Interaction

Takahiro Hirano; Masahiro Shiomi; Takamasa Iio; Mitsuhiko Kimoto; Takuya Nagashio; Ivan Tanev; Katsunori Shimohara; Norihiro Hagita

Haptic interaction is a key capability for social robots that closely interact with people in daily environments. Human communication cues such as gaze behaviors make haptic interaction look natural. Because the purpose of this study is to improve human-robot touch interaction, we conducted an experiment in which 20 participants interacted with a robot under different combinations of gaze behaviors and touch styles. The experimental results showed that both gaze behaviors and touch styles influence how people perceive touch interaction with a robot.


Advanced Robotics | 2016

Social acceptance by senior citizens and caregivers of a fall detection system using range sensors in a nursing home

Takamasa Iio; Masahiro Shiomi; Koji Kamei; Chandraprakash Sharma; Norihiro Hagita

We developed a fall detection system with a status-view function that uses range sensors in nursing homes and investigated how seniors and caregivers evaluated it in terms of their intention to use it and their feelings of security. Our system calculates the positions and heights of seniors from the range sensors to detect falls and sends an alert to the caregivers' terminals. Moreover, the system sends to the terminal silhouette images from the range sensor in which the person appears largest, to provide caregivers with detailed information about the senior. In the user evaluation, seniors and caregivers watched three videos: a simulated out-of-bed sensor, fall detection under constant observation, and fall detection without constant observation. Participants answered questionnaires and were interviewed after watching each video. The seniors indicated significantly higher intention to use and feelings of security for the second and third videos than for the first. Most seniors could accept being constantly monitored by the caregivers because they deemed safety to be more important than privacy. A few seniors (often healthy individuals) felt nervous under constant observation. Caregivers commented on the importance of flexibly switching the functions of the fall detection system to reflect individual status.
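To make the detection rule concrete, here is a minimal Python sketch of the kind of height-based logic the abstract describes: track a person's estimated height from the range sensors and alert caregivers when it stays low. The thresholds, frame rate, and the notify_caregivers hook are assumptions, not details from the paper.

from collections import deque

FALL_HEIGHT_M = 0.5      # assumed: below this height the person is likely on the floor
FALL_DURATION_S = 2.0    # assumed: how long the low height must persist
FRAME_RATE_HZ = 10

def notify_caregivers(person_id, silhouette):
    """Placeholder for sending an alert plus a silhouette image to caregiver terminals."""
    print(f"ALERT: possible fall of {person_id}")

class FallDetector:
    def __init__(self):
        # Keep roughly FALL_DURATION_S seconds of height estimates.
        self.history = deque(maxlen=int(FALL_DURATION_S * FRAME_RATE_HZ))

    def update(self, person_id, height_m, silhouette=None):
        """Feed one height estimate (from the range sensors) per frame."""
        self.history.append(height_m)
        if (len(self.history) == self.history.maxlen
                and max(self.history) < FALL_HEIGHT_M):
            notify_caregivers(person_id, silhouette)
            self.history.clear()

# Example: a person standing (1.6 m) whose estimated height then drops to 0.3 m.
detector = FallDetector()
for h in [1.6] * 10 + [0.3] * 25:
    detector.update("resident_42", h)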


Robot and Human Interactive Communication | 2015

Improvement of object reference recognition through human robot alignment

Mitsuhiko Kimoto; Takamasa Iio; Masahiro Shiomi; Ivan Tanev; Katsunori Shimohara; Norihiro Hagita

This paper reports an interactive approach to improving robots' recognition of objects indicated by humans during human-robot interaction. Our approach is based on two findings from conversations in which a human refers to an object and a robot confirms it. First, humans tend to use the same words or gestures as the robot, a phenomenon called alignment. Second, humans tend to decrease the amount of information in their references when the robot uses excess information in its confirmations; in other words, alignment inhibition. These findings lead to the following design: a robot should use enough information to identify objects without being excessive, because humans will, through alignment, eventually use similar information to refer to those objects, which improves recognition accuracy. If humans more frequently use the same information to identify objects, the robot can more easily recognize the objects being indicated. To verify our design, we developed a robotic system that recognizes the objects to which humans refer and conducted a controlled experiment with 2 × 3 conditions; one factor was the robot's confirmation style and the other was the arrangement of the objects. The first factor had two levels of information for identifying objects: enough information and excess information. The second factor had three levels: a congested arrangement, two groups, and a sparse set. We measured the recognition accuracy of the objects humans referred to and the amount of information in their references. In a particular arrangement, both the recognition success rate and the information amount were higher in the adequate-information condition than in the excess condition. The results suggested that our proposed interactive approach can improve recognition performance.
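One way to read "enough information without being excessive" is as a minimal distinguishing description: when confirming, the robot mentions only as many attributes as are needed to single out the object. The Python sketch below illustrates that reading under assumed attributes and objects; it is not the authors' system.

from itertools import combinations

# Hypothetical scene: each object described by a few attributes.
SCENE = {
    "obj_a": {"color": "red", "shape": "cup", "position": "left"},
    "obj_b": {"color": "red", "shape": "box", "position": "left"},
    "obj_c": {"color": "blue", "shape": "cup", "position": "right"},
}

def minimal_confirmation(target):
    """Return the smallest attribute set that uniquely identifies `target`.

    Mentioning exactly these attributes (and no more) is sufficient without
    being excessive, so a human who aligns with the robot will tend to reuse
    the same attributes, which the robot can then recognize reliably."""
    attrs = SCENE[target]
    for size in range(1, len(attrs) + 1):
        for keys in combinations(attrs, size):
            description = {k: attrs[k] for k in keys}
            matches = [o for o, a in SCENE.items()
                       if all(a.get(k) == v for k, v in description.items())]
            if matches == [target]:
                return description
    return attrs

print(minimal_confirmation("obj_a"))  # -> {'color': 'red', 'shape': 'cup'}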


International Conference on Social Robotics | 2010

Entrainment of pointing gestures by robot motion

Takamasa Iio; Masahiro Shiomi; Kazuhiko Shinozawa; Takaaki Akimoto; Katsunori Shimohara; Norihiro Hagita

Social robots need to recognize the objects indicated by people to work in real environments. This paper presents the entrainment of human pointing gestures during interaction with a robot and investigates which robot gestures are important for such entrainment. We conducted a Wizard-of-Oz experiment in which a person and a robot referred to objects, and we evaluated the entrainment frequency. The frequency was lowest when the robot used only pointing gestures and highest when it used both gazing and pointing gestures. These results suggest that not only a robot's pointing gestures but also its gazing gestures affect entrainment. We conclude that the entrainment of pointing gestures might improve a robot's ability to recognize them.


Society of Instrument and Control Engineers of Japan | 2008

Evolutionary adaptive behavior in noisy multi-agent system

Takamasa Iio; Ivan Tanev; Katsunori Shimohara

In this paper, we discuss the relationship between perceptual noise and the fitness of agents in a multi-agent system. In a multi-agent system, agents perceive environmental information and act on it. Therefore, when the perceptual information contains noise, cooperative behavior among the agents becomes more challenging and their resulting fitness is lower. To develop agent behavior that is robust to perceptual noise, we evolved the behavior of the agents in a noisy environment. As a result, the behavior evolved in a noisy environment is superior, in terms of robustness, to that evolved in a noiseless environment.
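The core idea, evaluating candidate behaviors under perceptual noise so that evolution selects for robustness, can be sketched in Python as follows. This is a generic, single-agent toy illustration rather than the paper's multi-agent setting; the task, noise level, and GA parameters are assumptions.

import random

NOISE_STD = 0.2        # assumed standard deviation of perceptual noise
GENERATIONS = 50
POP_SIZE = 30
TRIALS = 5             # evaluations per individual, averaged to cope with noise

def perceive(true_signal):
    """Agents observe the environment through Gaussian perceptual noise."""
    return true_signal + random.gauss(0.0, NOISE_STD)

def fitness(threshold):
    """Toy task: an agent should act whenever the (noisy) signal exceeds 0.5.
    Fitness is the fraction of correct decisions, averaged over noisy trials."""
    correct = 0
    samples = [random.random() for _ in range(100)]
    for _ in range(TRIALS):
        for s in samples:
            decision = perceive(s) > threshold
            correct += decision == (s > 0.5)
    return correct / (TRIALS * len(samples))

def evolve():
    population = [random.uniform(0.0, 1.0) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:POP_SIZE // 3]
        # Mutate the best individuals to form the next generation.
        population = parents + [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0.0, 0.05)))
            for _ in range(POP_SIZE - len(parents))
        ]
    return max(population, key=fitness)

print("evolved decision threshold:", round(evolve(), 2))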


Human-Agent Interaction | 2016

Alignment Approach Comparison between Implicit and Explicit Suggestions in Object Reference Conversations

Mitsuhiko Kimoto; Takamasa Iio; Masahiro Shiomi; Ivan Tanev; Katsunori Shimohara; Norihiro Hagita

Recognizing an object indicated by an interacting person is an essential function for a robot that acts in daily environments. To improve recognition accuracy, people's indicating behaviors need to be guided appropriately. For this purpose, we experimentally compared two interaction strategies: a robot that explicitly instructs people how to refer to objects, and a robot that implicitly aligns with people's indicating behaviors. Although our results showed that participants evaluated the implicit approach as more natural than the explicit approach, the recognition performances of the two approaches were not significantly different.

