
Publication


Featured research published by Takahiro Miyashita.


IEEE Transactions on Robotics | 2008

Analysis of Humanoid Appearances in Human–Robot Interaction

Takayuki Kanda; Takahiro Miyashita; Taku Osada; Yuji Haikawa; Hiroshi Ishiguro

Identifying the extent to which the appearance of a humanoid robot affects human behavior toward it is important. We compared participants' impressions of and behaviors toward two real humanoid robots in simple human-robot interactions. The two robots have different appearances but were controlled to perform the same utterances and motions, recorded and adjusted with a motion-capture system. We conducted an experiment with 48 human participants who individually interacted with both robots and, for reference, with a human. The results revealed that the different appearances did not affect participants' verbal behaviors, but they did affect nonverbal behaviors such as distance and delay of response. These differences are explained by two factors: impressions and attributions.


International Conference on Robotics and Automation | 2004

Navigation for human-robot interaction tasks

Philipp Althaus; Hiroshi Ishiguro; Takayuki Kanda; Takahiro Miyashita; Henrik I. Christensen

One major design goal in human-robot interaction is that robots behave in an intelligent manner, preferably similarly to humans. This constraint must also be taken into consideration when the navigation system for the platform is developed. However, research in human-robot interaction is often restricted to other components of the system, such as gestures, manipulation, and speech. On the other hand, research on mobile robot navigation focuses primarily on the task of reaching a certain goal point in an environment. We believe that these two problems cannot be treated separately for a personal robot that coexists with humans in the same surroundings. People move constantly while they are interacting with each other; hence, a robot should do so as well, which places constraints on the navigation system. This type of navigation is the focus of this paper. Methods have been developed for a robot to join a group of people engaged in a conversation. Preliminary results show that the platform's movement patterns are very similar to those of the persons. Moreover, this dynamic interaction was judged as natural by the test subjects, which greatly increases the perceived intelligence of the robot.


Intelligent Robots and Systems | 2007

Laser tracking of human body motion using adaptive shape modeling

Dylan F. Glas; Takahiro Miyashita; Hiroshi Ishiguro; Norihiro Hagita

In this paper we present a method for determining body orientation and pose information from laser scanner data using particle filtering with an adaptive modeling algorithm. A parametric human shape model is recursively updated to fit observed data after each resampling step of the particle filter. This updated model is then used in the likelihood estimation step for the following iteration. This method has been implemented and tested by using a network of laser range finders to observe human subjects in a variety of interactions. We present results illustrating that our method can closely track torso and arm movements even with noisy and incomplete sensor data, and we show examples of body language primitives that can be observed from this orientation and positioning information.
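As a rough illustration of the particle-filter-with-adaptive-shape idea described above (not the paper's implementation), the sketch below tracks a person's torso in 2-D laser points with a simplified circular contour model whose width is recursively re-fitted after each resampling step. The noise scales, likelihood bandwidth, and the circular model are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(particles, scan_points, width):
    """Weight each particle (x, y, theta) by how well a circular torso
    contour of the current width explains the laser points."""
    centers = particles[:, :2]
    # distance from each scan point to each hypothesised torso center
    d = np.linalg.norm(scan_points[None, :, :] - centers[:, None, :], axis=2)
    residual = np.abs(d - width / 2.0)  # points should lie near the contour
    return np.exp(-residual.mean(axis=1) / 0.05)

def step(particles, scan_points, width):
    # 1. predict: diffuse particles with Gaussian process noise
    #    (theta is carried along but unused by the circular model)
    particles = particles + rng.normal(0.0, [0.02, 0.02, 0.1], particles.shape)
    # 2. weight each hypothesis against the current shape model
    w = likelihood(particles, scan_points, width)
    w /= w.sum()
    # 3. resample proportionally to the weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx]
    # 4. adapt the shape model to fit the resampled estimate
    est = particles[:, :2].mean(axis=0)
    observed_width = 2.0 * np.median(np.linalg.norm(scan_points - est, axis=1))
    width = 0.9 * width + 0.1 * observed_width  # recursive model update
    return particles, width
```

The updated width then feeds back into the likelihood on the next iteration, which is the adaptive-modeling loop the abstract describes.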


IEEE Transactions on Human-Machine Systems | 2013

Person Tracking in Large Public Spaces Using 3-D Range Sensors

Drazen Brscic; Takayuki Kanda; Tetsushi Ikeda; Takahiro Miyashita

A method for tracking the position, orientation, and height of persons in large public environments is presented. Such information is useful both for understanding people's actions and for applications such as human-robot interaction. We use multiple 3-D range sensors mounted above human height to reduce occlusion between persons. A computationally simple tracking method is proposed that works on single-sensor data and combines multiple sensors so that large areas can be covered with a minimum number of sensors. Moreover, it can work with different sensor types and is robust to imperfect sensor measurements; therefore, it is possible to combine currently available 3-D range sensor solutions to achieve tracking in wide public spaces. The method was implemented in a shopping center environment, and it was shown that good tracking performance can be achieved.
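One building block of this kind of multi-sensor tracking is associating per-frame detections with existing tracks. The toy sketch below uses greedy nearest-neighbour association with a distance gate; the gate value, the snap-to-detection update, and the track-birth rule are illustrative assumptions, not the paper's method:

```python
import math

def associate(tracks, detections, gate=0.5):
    """Greedily match each existing track to the nearest unclaimed
    detection within `gate` metres; unmatched detections start new tracks."""
    unused = list(range(len(detections)))
    updated = []
    for tx, ty in tracks:
        best, best_d = None, gate
        for i in unused:
            dx, dy = detections[i]
            d = math.hypot(dx - tx, dy - ty)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            unused.remove(best)
            updated.append(detections[best])   # snap track to matched detection
        else:
            updated.append((tx, ty))           # no match: coast on last position
    updated.extend(detections[i] for i in unused)  # births for leftovers
    return updated
```

Detections from several overlapping sensors could be pooled into one list before association, which is one simple way to cover a large area with few sensors.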


Advanced Robotics | 2009

Laser-Based Tracking of Human Position and Orientation Using Parametric Shape Modeling

Dylan F. Glas; Takahiro Miyashita; Hiroshi Ishiguro; Norihiro Hagita

Robots designed to interact socially with people require reliable estimates of human position and motion. Additional pose data such as body orientation may enable a robot to interact more effectively by providing a basis for inferring contextual social information such as people's intentions and relationships. To this end, we have developed a system for simultaneously tracking the position and body orientation of many people, using a network of laser range finders mounted at torso height. An individual particle filter is used to track the position and velocity of each human, and a parametric shape model representing the person's cross-sectional contour is fitted to the observed data at each step. We demonstrate the system's tracking accuracy quantitatively in laboratory trials, and we present results from a field experiment observing subjects walking through the lobby of a building. The results show that our method can closely track torso and arm movements, even with noisy and incomplete sensor data, and we present examples of social information observable from this orientation and positioning information that may be useful for social robots.


ISRR | 2007

Haptic Communication Between Humans and Robots

Takahiro Miyashita; Taichi Tajika; Hiroshi Ishiguro; Kiyoshi Kogure; Norihiro Hagita

This paper introduces the haptic communication robots we developed and proposes a method for detecting human positions and postures based on haptic interaction between humanoid robots and humans. As tools for studying haptic communication, we have developed two types of humanoid robots with tactile sensors embedded in a soft skin that covers each robot's entire body. Tactile sensation could be used to detect a communication partner's position and posture even when the vision sensor does not observe the person. In the proposed method, the robot obtains a map that statistically describes relationships between its tactile information and human positions/postures from records of haptic interaction taken by tactile sensors and a motion-capture system during communication. The robot can then estimate its communication partner's position/posture from the tactile sensor outputs and the map. To verify the method's performance, we implemented it in the haptic communication robot. Experimental results show that the robot can statistically estimate a communication partner's position/posture.
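The tactile-to-posture map described above can be caricatured as instance-based regression: store (tactile pattern, partner posture) pairs from recorded interaction, then estimate the posture for a new pattern from its closest stored patterns. The class below is a toy stand-in under that assumption; the class name, `k` parameter, and averaging rule are all illustrative, not from the paper:

```python
import numpy as np

class HapticPostureMap:
    """Toy instance-based map from tactile sensor patterns to partner
    posture: record training pairs, then estimate by averaging the
    postures of the k nearest recorded patterns."""

    def __init__(self, k=3):
        self.k = k
        self.patterns = []   # tactile sensor vectors seen during training
        self.postures = []   # corresponding partner positions/postures

    def record(self, tactile, posture):
        self.patterns.append(np.asarray(tactile, dtype=float))
        self.postures.append(np.asarray(posture, dtype=float))

    def estimate(self, tactile):
        X = np.stack(self.patterns)
        d = np.linalg.norm(X - np.asarray(tactile, dtype=float), axis=1)
        idx = np.argsort(d)[: self.k]          # k closest tactile patterns
        return np.stack(self.postures)[idx].mean(axis=0)
```

A statistical map learned from motion-capture-aligned tactile records, as in the paper, would replace this lookup with a proper probabilistic model.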


Robotics and Autonomous Systems | 2004

Human-like natural behavior generation based on involuntary motions for humanoid robots

Takahiro Miyashita; Hiroshi Ishiguro

Human behaviors consist of both voluntary and involuntary motions. Almost all behaviors of task-oriented robots, however, consist solely of voluntary motions. Involuntary motions are important for generating natural motions like those of humans. Thus, we propose a natural behavior generation method for humanoid robots that is a hybrid generation between voluntary and involuntary motions. The key idea of our method is to control robots with a hybrid controller that combines the functions of a communication behavior controller and body balancing controllers. We also develop a wheeled inverted pendulum type of humanoid robot, named “Robovie-III”, in order to generate involuntary motions like oscillation. By applying our method to this robot and conducting preliminary experiments, we verify its validity. Experimental results show that the robot generates both voluntary and involuntary motions.
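The hybrid-controller idea can be sketched as superimposing three command terms: a voluntary gesture command, a balancing term for the wheeled inverted pendulum, and a small "involuntary" oscillation. This is only a sketch in the spirit of the paper; the PD gains, the sinusoidal sway term, and the single scalar command are assumptions:

```python
import math

def hybrid_command(t, tilt, tilt_rate, gesture_velocity,
                   kp=40.0, kd=8.0, sway_amp=0.02, sway_freq=0.4):
    """Illustrative hybrid controller: a PD balancing term for the
    inverted pendulum plus a small sinusoidal involuntary sway,
    superimposed on the voluntary gesture command (all gains assumed)."""
    balance = -kp * tilt - kd * tilt_rate                      # keep body upright
    sway = sway_amp * math.sin(2.0 * math.pi * sway_freq * t)  # involuntary motion
    return gesture_velocity + balance + sway
```

Because the balance term reacts to any tilt the gesture or sway induces, the voluntary and involuntary motions are generated together rather than scheduled separately.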


IEEE/SICE International Symposium on System Integration | 2010

Simulator platform that enables social interaction simulation — SIGVerse: SocioIntelliGenesis simulator

Tetsunari Inamura; Tomohiro Shibata; Hideaki Sena; Takashi Hashimoto; Nobuyuki Kawai; Takahiro Miyashita; Yoshiki Sakurai; Masahiro Shimizu; Mihoko Otake; Koh Hosoda; Satoshi Umeda; Kentaro Inui; Yuichiro Yoshikawa

Understanding the mechanisms of intelligence of human beings and animals is one of the most important approaches to developing intelligent robot systems. Since the mechanisms of such real-life intelligent systems are so complex, physical interactions between agents and their environment and the social interactions between agents should be considered. Comprehension and knowledge in many peripheral fields such as cognitive science, developmental psychology, brain science, evolutionary biology, and robotics are also required. Discussion from an interdisciplinary perspective is very important for this approach, but such collaborative research is time-consuming and labor-intensive, and it is difficult to obtain fruitful results because the basis of experiments differs greatly in each research field. In the social science field, for example, several multi-agent simulation systems have been proposed for modeling factors such as social interactions and language evolution, whereas robotics researchers often use dynamics and sensor simulators. However, there is no integrated system that provides both physical simulations and social communication simulations. Therefore, we developed a simulator environment called SIGVerse that combines dynamics, perception, and communication simulations for synthetic approaches to research into the genesis of social intelligence. In this paper, we introduce SIGVerse, present an example application, and discuss perspectives.


Intelligent Robots and Systems | 2013

Human-comfortable navigation for an autonomous robotic wheelchair

Yoichi Morales; Nagasrikanth Kallakuri; Kazuhiro Shinozawa; Takahiro Miyashita; Norihiro Hagita

Reliable autonomous navigation is an active research topic that has drawn attention for decades; however, human factors such as navigational comfort have not received the same level of attention. This work proposes the concept of a “comfortable map” and presents a navigation approach for autonomous passenger vehicles that, on top of being safe and reliable, is comfortable. In our approach, we first extract information about users' comfort preferences while sitting in a robotic wheelchair under different conditions in an indoor corridor environment. Human-comfort factors are integrated into a geometric map generated by a SLAM framework. A global planner then computes a safe and comfortable path, which is followed by the robotic wheelchair. Finally, an evaluation with 29 participants using a fully autonomous robotic wheelchair showed that more than 90% of them found the proposed approach more comfortable than a state-of-the-art shortest-path approach.
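A planner that trades extra distance for comfort can be sketched as shortest-path search over a grid whose cell costs encode discomfort. Below is a minimal Dijkstra sketch under that assumption; the grid representation and the additive step-plus-discomfort cost are illustrative stand-ins for the paper's comfortable map and global planner:

```python
import heapq

def comfortable_path(grid_cost, start, goal):
    """Dijkstra over a grid of per-cell discomfort costs (higher =
    less comfortable), so the cheapest path may detour around
    uncomfortable regions. Cells are (row, col) tuples."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + grid_cost[nr][nc]   # step length + discomfort
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # reconstruct the path by walking predecessors back from the goal
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

With discomfort values derived from user preferences (e.g. penalizing sharp turns or passing close to walls), the cheapest path becomes the "comfortable" rather than the shortest one.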


Intelligent Robots and Systems | 2005

Analysis of humanoid appearances in human-robot interaction

Takayuki Kanda; Takahiro Miyashita; Taku Osada; Yuji Haikawa; Hiroshi Ishiguro

Identifying the extent to which the appearance of a humanoid robot affects human behavior toward it is important. We compared participants' impressions of and behaviors toward two real humanoid robots in simple human-robot interactions. The two robots have different appearances but were controlled to perform the same utterances and motions, recorded and adjusted with a motion-capture system. We conducted an experiment with 48 human participants who individually interacted with both robots and, for reference, with a human. The results revealed that the different appearances did not affect participants' verbal behaviors, but they did affect nonverbal behaviors such as distance and delay of response. These differences are explained by two factors: impressions and attributions.
