Publication


Featured research published by Iwaki Toshima.


The International Journal of Robotics Research | 2004

Embodied Symbol Emergence Based on Mimesis Theory

Tetsunari Inamura; Iwaki Toshima; Hiroaki Tanie; Yoshihiko Nakamura

“Mimesis” theory from cognitive science and the “mirror neurons” discovered in biology show that the behavior generation process is not independent of the behavior cognition process; the two processes are closely related. During behavioral imitation, a human being does not perform a simple joint-coordinate transformation, but recognizes the parent’s behavior, understands it in abstracted form as symbols, and then generates its own behavior. Focusing on these facts, we propose a new method that carries out the behavior cognition and behavior generation processes at the same time. We also propose a mathematical model based on hidden Markov models that integrates four abilities: (1) symbol emergence; (2) behavior recognition; (3) self-behavior generation; and (4) acquisition of motion primitives. Finally, the feasibility of this method is shown through several experiments on a humanoid robot.
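The abstract above describes recognizing behaviors by scoring observed motion against hidden Markov models. A minimal sketch of that recognition step follows; the two toy models, their states, and the quantized motion symbols are illustrative assumptions, not the paper's actual parameters.

```python
import math

def forward_log_prob(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm). pi: initial state probs, A: transition matrix,
    B: emission probs B[state][symbol]."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * A[p][s] for p in range(n)) * B[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

# Two toy "behavior symbols", each an HMM over quantized motion features:
# "wave" tends to alternate symbols 0/1; "nod" tends to emit symbol 2.
wave = ([0.5, 0.5], [[0.1, 0.9], [0.9, 0.1]],
        [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])
nod = ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
       [[0.1, 0.1, 0.8], [0.1, 0.1, 0.8]])

def recognize(obs):
    """Behavior recognition: pick the symbol whose HMM best explains obs."""
    scores = {"wave": forward_log_prob(obs, *wave),
              "nod": forward_log_prob(obs, *nod)}
    return max(scores, key=scores.get)

print(recognize([0, 1, 0, 1, 0]))  # alternating features -> wave
print(recognize([2, 2, 2, 2, 2]))  # steady features -> nod
```

In the paper's framing, the same HMMs would also generate motion (by sampling) and yield new symbols (by training on observed sequences); only the recognition direction is sketched here.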


International Symposium on Experimental Robotics | 2003

Acquiring Motion Elements for Bidirectional Computation of Motion Recognition and Generation

Tetsunari Inamura; Iwaki Toshima; Yoshihiko Nakamura

Mimesis, a primitive skill of imitative learning, is regarded as an origin of human intelligence because imitation is a fundamental function for communication and symbol manipulation. When mimesis is adopted as a learning method for humanoids, designing full-body behavior becomes inconvenient because bottom-up learning from the robot side and top-down teaching from the user side are intertwined. We therefore propose a behavior acquisition and understanding system for humanoids based on mimesis theory. This system is able to abstract the observed behaviors of others into symbols, to recognize others’ behavior using those symbols, and to generate motion patterns from them. In this paper, we describe the integration of the mimesis loop and confirm its feasibility on virtual humanoids.


International Conference on Robotics and Automation | 2002

Acquisition and embodiment of motion elements in closed mimesis loop

Tetsunari Inamura; Iwaki Toshima; Yoshihiko Nakamura

During behavior development, a humanoid needs to acquire not just a trajectory but also the aim of the behavior and its symbolic information. We (2001) proposed the mimesis system as a framework for a synchronous learning model of behavior acquisition and symbol emergence. However, the motion elements, which are the fundamental representation of behavior, rested on the unsuitable assumption that they are given without taking the robot’s embodiment and dynamics into consideration. In this paper, we present a design theory for motion elements that takes embodiment into account and propose novel methods for realizing mimesis on real humanoids.


Presence: Teleoperators & Virtual Environments | 2008

Sound localization using an acoustical telepresence robot: TeleHead II

Iwaki Toshima; Shigeaki Aoki; Tatsuya Hirahara

TeleHead is an acoustical telepresence robot that we built on the basis of the concept that remote sound localization is best achieved by using a user-like dummy head whose movement synchronizes with the user’s head movement in real time. We clarified the characteristics of the latest version, TeleHead II, and verified the validity of this concept through sound localization experiments. TeleHead II synchronizes stably with the user’s head movement with a 120-ms delay. The driving noise level measured through headphones is below 24 dB SPL from 1 to 4 kHz. The shape difference between the dummy head and the user is about 3 in head width and 5 in head length. An overall measurement metric indicated that the difference between the head-related transfer functions (HRTFs) of the dummy head and the modeled listener is about 5 dB. The sound localization experiments using TeleHead II clarified that head movement improves horizontal-plane sound localization performance even when the dummy head’s shape differs from the user’s. In contrast, the results for head movement with a differently shaped dummy head were inconsistent in the median plane. Sound localization accuracy when using a same-shape dummy head tethered to the user’s head movement was always good. These results show that the TeleHead concept is acceptable for building an acoustical telepresence robot, and that the physical characteristics of TeleHead II are sufficient for conducting sound localization experiments.
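The "about 5 dB" figure above comes from comparing two HRTFs with an overall metric. The paper's exact metric is not given here; one common choice is log-spectral distortion, the RMS level difference in dB across frequency, sketched below with made-up magnitude responses.

```python
import math

def log_spectral_distortion(h1, h2):
    """RMS difference in dB between two magnitude responses sampled at
    the same frequencies -- one common way to compare a pair of HRTFs."""
    diffs_db = [20.0 * math.log10(a / b) for a, b in zip(h1, h2)]
    return math.sqrt(sum(d * d for d in diffs_db) / len(diffs_db))

# Hypothetical magnitude responses (linear magnitude, not dB) of a
# dummy-head HRTF and a listener's own HRTF at a few frequency bins.
dummy = [1.0, 0.8, 1.2, 0.9]
listener = [0.9, 1.0, 1.0, 1.1]

print(round(log_spectral_distortion(dummy, listener), 2))
```

Identical responses give 0 dB; larger values mean the dummy head filters sound less like the listener's own head does.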


Intelligent Robots and Systems | 2006

The effect of head movement on sound localization in an acoustical telepresence robot: TeleHead

Iwaki Toshima; Shigeaki Aoki

Using our acoustical telepresence robot, TeleHead, we have confirmed that not only stationary binaural features but also dynamic cues due to head movement play important roles in auditory localization. In this study, aiming toward an ideal acoustical telepresence robot, we clarified the relation between the speed of head movement and the accuracy of auditory localization in listening experiments. We examined the effect of two head-movement factors on sound localization accuracy: observation from multiple postures and the acquisition of dynamic information during head movement. The results suggest that both factors improve the accuracy of sound localization. Moreover, even when only one of these factors is available, localization accuracy is almost the same as the subjects’ original accuracy. The results confirm that even under very poor communication, control, and head-shape conditions, it may be possible to get a system to actually work. They also point to the possibility of building an acoustical telepresence robot with a dummy head of a generic shape, which is meaningful from an engineering viewpoint. In addition, they suggest the strong robustness of the human auditory sound localization function.


Advanced Robotics | 2009

Sound Localization During Head Movement Using an Acoustical Telepresence Robot: TeleHead

Iwaki Toshima; Shigeaki Aoki

Using our acoustical telepresence robot, TeleHead, we have confirmed that not only stationary binaural features but also dynamic cues from head movement play important roles in sound localization. In this study, aiming toward an ideal acoustical telepresence robot, we clarify the relation between head movement and the accuracy of sound localization in sound localization experiments. We examined two factors related to head movement that should affect sound localization accuracy: observation from multiple postures and dynamic information during head movement. The results suggest that both factors improve the accuracy of sound localization. Moreover, even when only one of these factors is available, localization accuracy is almost the same as the subjects’ original accuracy. The results confirm that even under very poor communication, control, and head-shape conditions, synchronization of head movement is important for building an acoustical telepresence robot. They also point to the possibility of building an acoustical telepresence robot with a dummy head of a generic shape, which is meaningful from an engineering viewpoint. In addition, they suggest the strong robustness of the human sound localization function.


International Conference on Robotics and Automation | 2005

Effect of driving delay with an acoustical tele-presence robot, TeleHead

Iwaki Toshima; Shigeaki Aoki

TeleHead is an acoustical telepresence robot that has a user-like dummy head whose movement synchronizes with the listener’s head movement in real time. Here, we summarize the concept of TeleHead and report a learning effect for driving delay. The results show that even with a 1-s delay in TeleHead’s movement, synchronization with the listener’s head movement is still effective. In addition, a user can adapt even to transmission delays of over 1 s without any feedback. This suggests a strong adaptability to delay, which cannot be avoided when using teleoperated robots such as TeleHead.


Advanced Robotics | 2014

Perception of delay time of head movement in using an acoustical telepresence robot: TeleHead

Iwaki Toshima; Shigeaki Aoki

We built an acoustical telepresence robot, TeleHead, which has a user-like dummy head and is synchronized with the user’s head movement in real time. We are trying to clarify the effects of reproducing head movement. In this paper, we evaluate the sense of incongruity induced by the delay time in reproducing head movement. The results indicate that whether users feel the delay depends on their spatial perception. The acceptable delay time can therefore be calculated from the user’s localization accuracy and head-movement speed. Even under the strictest condition, i.e., high-speed head movements, white noise makes sounds easier to localize and a delay of roughly 40 ms may be acceptable. Moreover, in conversational situations, an 80-ms delay is acceptable.
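One way to read the claim that acceptable delay follows from localization accuracy and head-movement speed: the delay is acceptable while the head rotates no farther than the listener can resolve. A back-of-the-envelope sketch follows; the specific numbers are illustrative assumptions, chosen only to land in the 40-ms range the abstract reports, and the ratio itself is an interpretation rather than the paper's stated formula.

```python
def acceptable_delay_ms(localization_accuracy_deg, head_speed_deg_per_s):
    """Delay during which the head rotates no farther than the user's
    localization accuracy, so the lag stays below the perceptual limit."""
    return localization_accuracy_deg / head_speed_deg_per_s * 1000.0

# Illustrative values only (not taken from the paper): a listener who
# localizes white noise to within ~4 degrees while turning the head at
# ~100 degrees per second tolerates a delay on the order of 40 ms.
print(acceptable_delay_ms(4.0, 100.0))  # -> 40.0
```

Under this reading, slower head movements or coarser localization (as in conversational situations) lengthen the acceptable delay, consistent with the 80-ms figure above.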


International Conference on Robotics and Automation | 2007

Quantitative evaluation of delay time of head movement for an acoustical telepresence robot: TeleHead

Iwaki Toshima; Shigeaki Aoki

We built an acoustical telepresence robot, TeleHead, which has a user-like dummy head and is synchronized with the user’s head movement in real time. We are trying to clarify the effects of reproducing head movement. In this paper, we evaluated the sense of incongruity induced by the delay time in reproducing head movement. The results indicate that head-movement control should have a dead time shorter than 27 ms. As a guideline for building an acoustical telepresence robot, this dead time does not depend on the robot’s head shape. The results also suggest that the cue for discriminating delay is not the delay time itself; rather, subjects might discriminate the difference between auditory sound localization and the somatosensory perception of their head posture.


International Conference on Robotics and Automation | 2001

Imitation and primitive symbol acquisition of humanoids by the integrated mimesis loop

Tetsunari Inamura; Yoshihiko Nakamura; Hideaki Ezaki; Iwaki Toshima

Collaboration


Dive into Iwaki Toshima's collaboration.

Top Co-Authors

Tatsuya Hirahara, Toyama Prefectural University
Hideaki Ezaki, Kawasaki Heavy Industries
Makoto Otani, Toyama Prefectural University
Hiroyuki Sagara, Tokyo Institute of Technology
Makio Kashino, Nippon Telegraph and Telephone