Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yoshinobu Hagiwara is active.

Publication


Featured research published by Yoshinobu Hagiwara.


Annual Conference of the IEEE Industrial Electronics Society (IECON) | 2015

Statistical localization exploiting convolutional neural network for an autonomous vehicle

Satoshi Ishibushi; Akira Taniguchi; Toshiaki Takano; Yoshinobu Hagiwara; Tadahiro Taniguchi

In this paper, we propose a self-localization method for autonomous vehicles that exploits object recognition results obtained using convolutional neural networks (CNNs). Monte-Carlo localization (MCL) is one of the most popular localization methods that use odometry and distance-sensor data to determine vehicle position, but it often suffers from global positional errors: the particles representing the vehicle's position form a multimodal distribution, i.e., a distribution with several peaks. To overcome this problem, we propose a method in which an autonomous vehicle integrates object recognition results obtained using CNNs as measurement data in a bag-of-features representation. The semantic information in the CNN recognition results reduces global localization errors. The experimental results show that the proposed method causes the distribution of vehicle positions and particle orientations to converge and reduces global positional errors.
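A minimal sketch of the core idea, assuming a hypothetical semantic map that associates grid cells with expected object histograms; it is not the authors' implementation, only an illustration of how CNN bag-of-features observations could reweight MCL particles.

```python
import numpy as np

# Sketch (assumption): fuse CNN object-recognition histograms into Monte-Carlo
# localization as an extra measurement likelihood. The semantic map below is
# hypothetical and maps grid cells to the bag-of-features (BoF) histogram of
# objects expected to be visible there.

rng = np.random.default_rng(0)
N_PARTICLES, N_CLASSES = 500, 10

semantic_map = {cell: rng.dirichlet(np.ones(N_CLASSES))      # expected BoF per cell
                for cell in [(x, y) for x in range(5) for y in range(5)]}

particles = rng.uniform(0, 5, size=(N_PARTICLES, 2))         # (x, y) poses
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def bof_likelihood(observed_hist, expected_hist, eps=1e-6):
    """Cosine similarity between observed and expected object histograms."""
    num = np.dot(observed_hist, expected_hist)
    den = np.linalg.norm(observed_hist) * np.linalg.norm(expected_hist) + eps
    return num / den

def measurement_update(particles, weights, observed_hist):
    """Reweight particles by how well the CNN observation matches the map."""
    for i, (x, y) in enumerate(particles):
        expected = semantic_map.get((int(x), int(y)), np.ones(N_CLASSES) / N_CLASSES)
        weights[i] *= bof_likelihood(observed_hist, expected)
    weights /= weights.sum()
    return weights

# Example: one CNN observation summarized as a normalized class histogram.
observed = rng.dirichlet(np.ones(N_CLASSES))
weights = measurement_update(particles, weights, observed)

# Resample to concentrate particles on high-likelihood regions.
idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
particles = particles[idx]
```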


Advanced Robotics | 2017

Bayesian body schema estimation using tactile information obtained through coordinated random movements

Tomohiro Mimura; Yoshinobu Hagiwara; Tadahiro Taniguchi; Tetsunari Inamura

This paper describes a computational model, called the Dirichlet process Gaussian mixture model with latent joints (DPGMM-LJ), that can find a latent tree structure embedded in a data distribution in an unsupervised manner. By combining DPGMM-LJ with a preexisting body map formation method, we propose a method that enables an agent with a multilink body structure to discover its kinematic structure, i.e., body schema, from tactile information alone. DPGMM-LJ is a probabilistic model based on Bayesian nonparametrics and an extension of the Dirichlet process Gaussian mixture model (DPGMM). In a simulation experiment, we used a simple fetus model that had five body parts and performed structured random movements in a womb-like environment. The method could estimate the number of body parts and the kinematic structure without any preexisting knowledge in many cases. Another experiment showed that the degree of motor coordination in random movements strongly affects the result of body schema formation: the accuracy of body schema estimation reached its highest value, 84.6%, when the ratio of motor coordination was 0.9 in our setting. These results suggest that kinematic structure can be estimated from tactile information obtained by a fetus moving randomly in a womb without any visual information, although the accuracy was not high. They also suggest that a certain degree of motor coordination in random movements and a sufficient dimensionality of the state space representing the body map are important for estimating the body schema correctly.
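As a rough illustration of the nonparametric clustering stage only (the latent-joint estimation of DPGMM-LJ is not reproduced), a Dirichlet-process mixture from scikit-learn can infer the number of body parts from synthetic tactile contact points; the body layout below is an assumption.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Sketch (assumption): approximate the clustering stage of DPGMM-LJ with
# scikit-learn's Dirichlet-process mixture. The synthetic "tactile" points
# stand in for contact locations on a five-part body.

rng = np.random.default_rng(1)
part_centers = np.array([[0, 0], [2, 0], [4, 0], [2, 2], [2, -2]], dtype=float)
tactile = np.vstack([c + 0.3 * rng.standard_normal((200, 2)) for c in part_centers])

# The Dirichlet-process prior lets the model prune unneeded components, so the
# number of body parts need not be fixed a priori.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    covariance_type="full",
    max_iter=500,
    random_state=1,
).fit(tactile)

effective_parts = np.sum(dpgmm.weights_ > 0.02)   # components with non-negligible mass
print("estimated number of body parts:", effective_parts)
```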


IEEE Global Conference on Consumer Electronics | 2015

Cloud based VR system with immersive interfaces to collect multimodal data in human-robot interaction

Yoshinobu Hagiwara

This paper presents a cloud-based VR system with immersive interfaces for collecting multimodal data in human-robot interaction, together with its applications. The proposed system enables a subject to log in to the VR space as an avatar and to interact naturally with a virtual robot through immersive interfaces. A head-mounted display and a motion-capture device provide immersive visualization and natural motion control, respectively. The proposed VR system can carry out natural human-robot interaction in the VR space while simultaneously collecting visual, physical, and voice data through the immersive interfaces. Two application experiments, learning object attributes and learning a communication protocol, demonstrate the utility of the proposed system.
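A hypothetical sketch of how the collected multimodal streams (head pose, body motion, voice) might be logged per session; the field names and JSON-lines layout are illustrative assumptions, not the system's actual format.

```python
import json
import time
from dataclasses import dataclass, asdict, field
from typing import List

# Sketch (assumption): one way to log the multimodal streams the paper collects
# (visual/head pose, physical/body motion, voice) during a VR interaction session.

@dataclass
class MultimodalSample:
    timestamp: float
    head_pose: List[float]        # e.g. [x, y, z, roll, pitch, yaw] from the HMD
    joint_angles: List[float]     # body posture from the motion-capture device
    utterance: str = ""           # recognized speech, empty if silent

@dataclass
class InteractionLog:
    subject_id: str
    samples: List[MultimodalSample] = field(default_factory=list)

    def record(self, head_pose, joint_angles, utterance=""):
        self.samples.append(
            MultimodalSample(time.time(), list(head_pose), list(joint_angles), utterance))

    def save(self, path):
        with open(path, "w") as f:
            for s in self.samples:
                f.write(json.dumps({"subject": self.subject_id, **asdict(s)}) + "\n")

log = InteractionLog("subject_01")
log.record([0.0, 1.6, 0.0, 0.0, 0.1, 0.0], [0.0] * 20, "please clean the table")
log.save("session.jsonl")
```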


International Conference on Social Robotics | 2017

Learning Relationships Between Objects and Places by Multimodal Spatial Concept with Bag of Objects

Shota Isobe; Akira Taniguchi; Yoshinobu Hagiwara; Tadahiro Taniguchi

Human support robots need to learn the relationships between objects and places to provide services such as cleaning rooms and locating objects through linguistic communication. In this paper, we propose a Bayesian probabilistic model that can automatically model and estimate the probability of objects existing in each place using a multimodal spatial concept based on the co-occurrence of objects. In our experiments, we evaluated the estimation results for objects by using a word expressing their places. Furthermore, we showed that the robot could perform tasks such as tidying up objects as an example use of the method, and that it correctly learned the relationships between objects and places.
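A minimal sketch of the underlying intuition, reducing the multimodal spatial concept to a smoothed object-place co-occurrence table; the place names, object names, and counts are hypothetical.

```python
import numpy as np

# Sketch (assumption): P(object | place) estimated from co-occurrence counts with
# a Dirichlet (add-alpha) prior, then Bayes' rule gives P(place | object) for
# "where does this object belong?" queries.

places = ["kitchen", "desk", "shelf"]
objects = ["cup", "book", "pen"]

# Hypothetical observation counts: counts[i, j] = times object j was seen at place i.
counts = np.array([[8, 0, 1],
                   [1, 2, 6],
                   [0, 7, 2]], dtype=float)

alpha = 0.5                                            # Dirichlet smoothing
p_obj_given_place = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)
p_place = counts.sum(axis=1) / counts.sum()            # prior over places

def place_posterior(obj):
    """P(place | object) by Bayes' rule over the smoothed co-occurrence table."""
    j = objects.index(obj)
    unnorm = p_obj_given_place[:, j] * p_place
    return dict(zip(places, unnorm / unnorm.sum()))

print(place_posterior("book"))    # the book most probably belongs on the shelf
```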


Human-Robot Interaction | 2017

Learning from Human Collaborative Experience: Robot Learning via Crowdsourcing of Human-Robot Interaction

Jeffrey Too Chuan Tan; Yoshinobu Hagiwara; Tetsunari Inamura

Human-robot collaboration is a promising yet challenging direction for robot development because of the vast diversity of human partner behaviors to which a robot must adapt. In this work, we develop a robot learning framework that learns through a data-driven approach, where collaboration data are collected through crowdsourcing of human-robot interaction. We propose adding to the learning policy a formal definition incorporating partner behaviors and a set of state features for work conditions related to the collaboration task. A collaborative table-setting task scenario was developed with the capability to perform cloud-based human-robot interaction for crowdsourced data gathering. Human-human collaboration experiments were conducted to gather interaction data and build the case-based planning libraries, and evaluation experiments confirmed the effectiveness of the proposed learning approach.
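A simplified sketch of case-based planning over crowdsourced interaction data, assuming a toy case library and feature encoding; it illustrates the retrieval step only, not the authors' full learning policy.

```python
import numpy as np

# Sketch (assumption): nearest-neighbour retrieval over stored collaboration cases.
# Each case stores state features of the work condition, the observed partner
# behaviour, and the robot action that followed; all entries are illustrative.

case_library = [
    {"state": [1, 0, 0, 1], "partner": "places_plate", "robot_action": "bring_fork"},
    {"state": [0, 1, 0, 1], "partner": "places_glass", "robot_action": "bring_pitcher"},
    {"state": [1, 1, 0, 0], "partner": "wipes_table", "robot_action": "wait"},
]

def retrieve_action(state, partner_behavior):
    """Pick the action of the closest stored case with a matching partner behaviour."""
    candidates = [c for c in case_library if c["partner"] == partner_behavior]
    if not candidates:
        candidates = case_library                      # fall back to all cases
    dists = [np.linalg.norm(np.array(c["state"]) - np.array(state)) for c in candidates]
    return candidates[int(np.argmin(dists))]["robot_action"]

print(retrieve_action([1, 0, 1, 1], "places_plate"))   # -> bring_fork
```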


International Conference on Control, Automation and Systems | 2015

Accuracy evaluation of camera-based position and heading measurement system for vessel positioning at a very close distance

Yoshiaki Mizuchi; Tadashi Ogura; Young-Bok Kim; Yoshinobu Hagiwara; Yongwoon Choi

In this study, we propose a measurement system consisting of two pairs of cameras mounted on pan-tilt units and a landmark installed on the target side. The purpose of the proposed system is to measure the relative position and heading angle of a vessel with respect to a target at a very close distance, in order to reduce collision risk through automated vessel positioning, which requires high measurement accuracy and a high measurement rate. To achieve such measurement, cameras, which have a wide sensing range and high angular resolution, are used instead of GPS, radar, or laser sensors. The proposed system can also measure the distance and direction angle to a target accurately and quickly by applying a simple and robust target detection method. The position and heading of the vessel are determined from the two resulting distance-direction pairs. To evaluate the measurement accuracy of the proposed system, we measured the position and heading of the system relative to landmarks under several conditions, including positional displacement and rotation. The experimental results demonstrate that the proposed system can be used for automated vessel positioning at a close distance.
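A geometric sketch, under assumed camera offsets and landmark positions, of how two (distance, direction-angle) measurements can be turned into a relative position and heading estimate; the numeric measurements are illustrative only.

```python
import numpy as np

# Sketch (assumption): recover a vessel's planar position and heading from two
# (range, bearing) measurements, one per camera unit, to two landmarks whose
# positions are known in the target (world) frame.

def rot(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s], [s, c]])

def vessel_pose(cams, meas, landmarks):
    """cams: camera positions in the vessel frame; meas: (range, bearing) per camera
    in the vessel frame; landmarks: landmark positions in the world frame."""
    # Landmark positions expressed in the vessel frame.
    p = np.array([cams[i] + meas[i][0] * np.array([np.cos(meas[i][1]),
                                                   np.sin(meas[i][1])])
                  for i in range(2)])
    # Heading: rotation aligning the vessel-frame baseline with the world baseline.
    dw, dv = landmarks[1] - landmarks[0], p[1] - p[0]
    psi = np.arctan2(dw[1], dw[0]) - np.arctan2(dv[1], dv[0])
    # Position: world landmark minus rotated vessel-frame landmark vector.
    pos = landmarks[0] - rot(psi) @ p[0]
    return pos, psi

cams = np.array([[1.0, 2.0], [1.0, -2.0]])                 # pan-tilt units on the bow
landmarks = np.array([[20.0, 5.0], [20.0, -5.0]])          # markers on the quay
pos, psi = vessel_pose(cams, [(18.5, 0.12), (18.8, -0.30)], landmarks)
print("vessel position:", pos, "heading [rad]:", psi)
```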


Human-Robot Interaction | 2018

Representation of Embodied Collaborative Behaviors in Cyber-Physical Human-Robot Interaction with Immersive User Interfaces

Jeffrey Too Chuan Tan; Yoshiaki Mizuchi; Yoshinobu Hagiwara; Tetsunari Inamura

Collaborative robots require high intelligence in order to adapt to widely diverse human partner behaviors. We have proposed interactive robot learning of collaborative actions from large datasets of human-robot interactions. This work extends that development by incorporating embodied behaviors in human-robot collaboration into the robot learning approach. In this preliminary work, we develop immersive user interfaces with virtual reality devices for cyber-physical interaction between human and robot, in order to represent embodied collaborative behaviors in our experiment system. We obtain the human's visual observation in the virtual world by tracking the movement of the head-mounted device to determine the observed target, the verbal communication between agents from the spoken speech, and the agents' actions, i.e., the body movements, by tracking the avatar's location in the virtual world to determine the traveled path.
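A hypothetical sketch of one part of such a pipeline: determining the visually observed target from head-mounted-device tracking by picking the object closest to the gaze ray. Object names and poses are invented for illustration.

```python
import numpy as np

# Sketch (assumption): estimate the observed target by ray casting along the
# head's forward direction and selecting the object with the smallest angular
# offset inside a viewing cone.

objects = {"cup": np.array([1.0, 0.2, 0.9]),
           "robot": np.array([2.0, 0.0, 1.2]),
           "plate": np.array([0.5, -1.0, 0.8])}

def observed_target(head_pos, head_forward, objects, max_angle_deg=15.0):
    """Return the object closest to the gaze ray, or None if all fall outside the cone."""
    forward = head_forward / np.linalg.norm(head_forward)
    best, best_angle = None, np.radians(max_angle_deg)
    for name, pos in objects.items():
        to_obj = pos - head_pos
        angle = np.arccos(np.clip(np.dot(forward, to_obj / np.linalg.norm(to_obj)), -1, 1))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

head_pos = np.array([0.0, 0.0, 1.6])
head_forward = np.array([1.0, 0.0, -0.2])
print(observed_target(head_pos, head_forward, objects))   # -> "robot"
```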


Frontiers in Neurorobotics | 2018

Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

Yoshinobu Hagiwara; Masakazu Inoue; Hiroyoshi Kobayashi; Tadahiro Taniguchi

In this paper, we propose a hierarchical spatial concept formation method based on a Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans can select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results obtained using a convolutional neural network (CNN), the result of hierarchical k-means clustering of the self-position estimated by Monte Carlo localization (MCL), and a set of location names are used as features for the vision, position, and word modalities, respectively. Experiments in forming hierarchical spatial concepts and evaluating how well the proposed method predicts unobserved location names and position categories were performed using a robot in the real world. The results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to the predictions made by humans. As an application example in a home environment, a demonstration in which a human support robot moves to an instructed place according to human speech instructions is achieved using the formed hierarchical spatial concepts.
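A small sketch of the position feature only, i.e., a two-level (coarse/fine) k-means clustering of self-positions; the cluster counts and synthetic positions are assumptions, and the hMLDA inference that ties vision, position, and word modalities together is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch (assumption): two-level k-means over self-positions (as estimated by MCL)
# to obtain coarse (room-level) and fine (place-level) position categories.

rng = np.random.default_rng(2)
rooms = np.array([[0, 0], [6, 0], [3, 5]], dtype=float)          # synthetic "home" layout
positions = np.vstack([r + 0.8 * rng.standard_normal((150, 2)) for r in rooms])

coarse = KMeans(n_clusters=3, n_init=10, random_state=0).fit(positions)

fine_labels = np.empty(len(positions), dtype=int)
next_id = 0
for c in range(3):
    idx = np.where(coarse.labels_ == c)[0]
    fine = KMeans(n_clusters=2, n_init=10, random_state=0).fit(positions[idx])
    fine_labels[idx] = fine.labels_ + next_id                    # globally unique ids
    next_id += 2

# Each sample now carries a coarse and a fine position category, which would be
# combined with CNN object histograms and location-name words in the full model.
print("coarse categories:", np.unique(coarse.labels_))
print("fine categories:", np.unique(fine_labels))
```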


International Symposium on Micro-NanoMechatronics and Human Science | 2016

Analysis of slow dynamics of kinematic structure estimation after physical disorder: Constructive approach toward phantom limb pain

Tomohiro Mimura; Yoshinobu Hagiwara; Tadahiro Taniguchi; Tetsunari Inamura; Shiro Yano

A recent cutting-edge research topic in the medical field is phantom limb pain, which is associated with the slow dynamics of the human kinematic structure after a physical disorder. In this study, we analyze the mechanism of phantom limb pain using a computational model that estimates a body link structure with a nonparametric Bayesian approach. The computational model, called the Dirichlet process Gaussian mixture model with latent joints (DPGMM-LJ), simultaneously estimates a body map and a body link structure using only tactile information obtained from the body. To analyze the slow dynamics of phantom limb pain, we performed an experiment to observe how the body map and the body link structure estimated by DPGMM-LJ change. The experiment was performed in a virtual environment using a simple body model in which a body part was lost. For the tactile information at the lost body part, we set four experimental conditions, including one in which the information becomes zero and one in which it becomes white noise. The slow dynamics of phantom limb pain are discussed based on the experimental results.
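A brief sketch of how the two conditions named above (zeroed input and white noise at the lost body part) might be generated for such a simulation; the array shapes and the index of the lost part are illustrative.

```python
import numpy as np

# Sketch (assumption): replace the tactile channel of a "lost" body part with
# either zeros or white noise before feeding the stream to the estimator.

rng = np.random.default_rng(3)
T, n_parts, dim = 1000, 5, 2
tactile = rng.standard_normal((T, n_parts, dim))        # intact tactile stream

def apply_disorder(tactile, lost_part, condition):
    out = tactile.copy()
    if condition == "zero":
        out[:, lost_part, :] = 0.0                       # sensor returns nothing
    elif condition == "white_noise":
        out[:, lost_part, :] = rng.standard_normal((tactile.shape[0], tactile.shape[2]))
    return out

zeroed = apply_disorder(tactile, lost_part=4, condition="zero")
noisy = apply_disorder(tactile, lost_part=4, condition="white_noise")
```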


International Conference on Intelligent Autonomous Systems | 2016

Simultaneous Localization, Mapping and Self-body Shape Estimation by a Mobile Robot

Akira Taniguchi; Lv WanPeng; Tadahiro Taniguchi; Toshiaki Takano; Yoshinobu Hagiwara; Shiro Yano

This paper describes a new method for estimating the body shape of a mobile robot using sensory-motor information. In many biological systems, estimating the body shape is important for behaving appropriately in a complex environment: humans and other animals can form a body image and determine actions based on their recognized body shape. Conventional mobile robots, however, have not had the ability to estimate their body shape; instead, developers have provided body shape information to the robots. In this paper, we describe a new method that enables a robot to automatically estimate its self-body shape using only subjective information, e.g., motor commands and distance-sensor readings. We call the method simultaneous localization, mapping, and self-body shape estimation (SLAM-SBE). The method is based on Bayesian statistics and is obtained by extending the simultaneous localization and mapping (SLAM) method. Experimental results show that a mobile robot can obtain a self-body shape image represented by an occupancy grid using only its sensory-motor information (i.e., without any objective measurement of its body).
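A sketch of the self-body representation only: a standard log-odds occupancy-grid update in the robot's body frame. The joint SLAM-SBE inference over map, trajectory, and body shape is not reproduced here; the grid size and sensor geometry are assumptions.

```python
import numpy as np

# Sketch (assumption): a body-centred occupancy grid updated with the standard
# log-odds rule, showing how ray measurements would carve out a self-body shape.

SIZE, RES = 40, 0.05                       # 2 m x 2 m grid, 5 cm cells, robot at centre
log_odds = np.zeros((SIZE, SIZE))
L_OCC, L_FREE = 0.85, -0.4                 # log-odds increments

def to_cell(p):
    return (int(np.floor(p[0] / RES)) + SIZE // 2, int(np.floor(p[1] / RES)) + SIZE // 2)

def update_ray(origin, angle, dist, max_range=1.0):
    """Mark cells before the return as free and the endpoint cell as occupied."""
    steps = int(min(dist, max_range) / RES)
    direction = np.array([np.cos(angle), np.sin(angle)])
    for k in range(steps):
        i, j = to_cell(origin + k * RES * direction)
        if 0 <= i < SIZE and 0 <= j < SIZE:
            log_odds[i, j] += L_FREE
    if dist < max_range:                   # a return inside range: endpoint occupied
        i, j = to_cell(origin + dist * direction)
        if 0 <= i < SIZE and 0 <= j < SIZE:
            log_odds[i, j] += L_OCC

# Example: a sensor mounted 0.2 m ahead of the robot centre sweeps 360 degrees
# and keeps hitting the robot's own chassis 0.15 m away in the rear half.
origin = np.array([0.2, 0.0])
for a in np.linspace(0, 2 * np.pi, 72, endpoint=False):
    dist = 0.15 if np.cos(a) < 0 else 0.9
    update_ray(origin, a, dist)

occupancy = 1 / (1 + np.exp(-log_odds))    # convert log-odds back to probabilities
```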

Collaboration


Dive into Yoshinobu Hagiwara's collaborations.

Top Co-Authors

Tetsunari Inamura
National Institute of Informatics

Tadashi Ogura
Soka University of America

Yongwoon Choi
Soka University of America

Young-Bok Kim
Pukyong National University

Shiro Yano
Tokyo University of Agriculture and Technology