Publication


Featured research published by Akira Taniguchi.


Conference of the Industrial Electronics Society | 2015

Statistical localization exploiting convolutional neural network for an autonomous vehicle

Satoshi Ishibushi; Akira Taniguchi; Toshiaki Takano; Yoshinobu Hagiwara; Tadahiro Taniguchi

In this paper, we propose a self-localization method for autonomous vehicles that exploits object recognition results obtained using convolutional neural networks (CNNs). Monte-Carlo localization (MCL) is one of the most popular localization methods that use odometry and distance-sensor data to determine vehicle position. However, MCL often suffers from global positional errors, in which the particles representing the vehicle's position form a multimodal distribution, i.e., a distribution with several peaks. To overcome this problem, we propose a method in which an autonomous vehicle integrates object recognition results, obtained using CNNs, into the measurement data through a bag-of-features representation. The semantic information contained in the CNN recognition results reduces the global errors in localization. The experimental results show that the proposed method can make the distribution of particle positions and orientations converge and can reduce the global positional errors.
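
The fusion idea can be sketched in a few lines. The following is a minimal, illustrative particle filter, not the authors' implementation: the 10 m x 10 m map, the bag-of-features dimensionality, the noise levels, and the stand-in sensor likelihoods are all assumptions made only for this example.

```python
# Minimal sketch of Monte-Carlo localization in which each particle's weight
# combines a range-sensor likelihood with a bag-of-features (BoF) likelihood
# derived from CNN object-recognition results.  NOT the authors' implementation:
# the map, BOF_DIM, noise levels, and the stand-in likelihoods are assumptions.
import numpy as np

N_PARTICLES = 500
BOF_DIM = 16                               # assumed size of the object-category histogram

rng = np.random.default_rng(0)
particles = rng.uniform([0.0, 0.0, -np.pi], [10.0, 10.0, np.pi], size=(N_PARTICLES, 3))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

# Hypothetical map knowledge: expected BoF histogram per 1 m x 1 m cell.
expected_bof = rng.dirichlet(np.ones(BOF_DIM), size=(10, 10))

def motion_update(particles, v, w, dt=0.1):
    """Propagate particles with a noisy unicycle model (odometry step)."""
    noise = rng.normal(0.0, [0.05, 0.05, 0.02], size=particles.shape)
    moved = particles.copy()
    moved[:, 0] += v * dt * np.cos(particles[:, 2])
    moved[:, 1] += v * dt * np.sin(particles[:, 2])
    moved[:, 2] += w * dt
    return moved + noise

def measurement_update(particles, weights, range_like, observed_bof):
    """Re-weight particles by range likelihood times BoF similarity at their cell."""
    cells = np.clip(particles[:, :2].astype(int), 0, 9)
    bof_like = np.array([expected_bof[cx, cy] @ observed_bof for cx, cy in cells])
    new_w = weights * range_like * (bof_like + 1e-12)
    return new_w / new_w.sum()

def resample(particles, weights):
    """Standard importance resampling."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One illustrative filter step with dummy measurements.
particles = motion_update(particles, v=1.0, w=0.1)
observed_bof = rng.dirichlet(np.ones(BOF_DIM))            # stand-in for the CNN output
range_like = rng.uniform(0.5, 1.0, N_PARTICLES)           # stand-in for a LIDAR model
weights = measurement_update(particles, weights, range_like, observed_bof)
particles, weights = resample(particles, weights)
```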


IEEE Transactions on Cognitive and Developmental Systems | 2016

Spatial Concept Acquisition for a Mobile Robot That Integrates Self-Localization and Unsupervised Word Discovery From Spoken Sentences

Akira Taniguchi; Tadahiro Taniguchi; Tetsunari Inamura

In this paper, we propose a novel unsupervised learning method for the lexical acquisition of words related to places visited by robots, from continuous human speech signals. We address the problem of learning novel words by a robot that has no prior knowledge of these words except for a primitive acoustic model. Furthermore, we propose a method that allows a robot to effectively use the learned words and their meanings for self-localization tasks. The proposed method is a nonparametric Bayesian spatial concept acquisition method (SpCoA) that integrates a generative model for self-localization and unsupervised word segmentation of uttered sentences via latent variables related to the spatial concept. We implemented SpCoA on SIGVerse, a simulation environment, and on TurtleBot2, a mobile robot in a real environment, and conducted experiments to evaluate its performance. The experimental results showed that SpCoA enabled the robot to acquire the names of places from spoken sentences. They also revealed that the robot could effectively utilize the acquired spatial concepts and reduce the uncertainty in self-localization.
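
To make the integration concrete, here is a minimal sketch, not the authors' code, of the generative idea: each latent spatial concept ties a Gaussian over positions to a multinomial over already segmented words, so a heard word yields a position likelihood and a position yields a word posterior. All numbers, place words, and the shared isotropic variance are invented; SpCoA itself is nonparametric and also performs word segmentation.

```python
# Sketch of a finite spatial-concept mixture: p(position | c) is Gaussian,
# p(word | c) is multinomial, p(c) is a categorical prior.  Invented parameters.
import numpy as np

mu    = np.array([[1.0, 1.0], [4.0, 2.0], [2.0, 5.0]])     # concept position means
sigma = 0.5                                                  # assumed shared std dev
pi    = np.array([0.4, 0.3, 0.3])                            # concept prior p(c)
vocab = ["kitchen", "desk", "door"]
word_probs = np.array([[0.8, 0.1, 0.1],                      # p(word | c), one row per c
                       [0.1, 0.8, 0.1],
                       [0.1, 0.1, 0.8]])

def gaussian(x, m, s):
    """Isotropic 2-D Gaussian density."""
    return np.exp(-np.sum((x - m) ** 2) / (2 * s ** 2)) / (2 * np.pi * s ** 2)

def word_posterior(position):
    """p(word | position) = sum_c p(word | c) p(c | position)."""
    like = np.array([gaussian(position, m, sigma) for m in mu]) * pi
    post_c = like / like.sum()
    return word_probs.T @ post_c

def word_position_score(word, position):
    """sum_c p(position | c) p(word | c) p(c), proportional to p(position | word);
    a term like this can act as an extra measurement during self-localization."""
    w = vocab.index(word)
    like = np.array([gaussian(position, m, sigma) for m in mu])
    return float(like @ (pi * word_probs[:, w]))

print(dict(zip(vocab, word_posterior(np.array([1.2, 0.9])))))
print(word_position_score("kitchen", np.array([1.2, 0.9])))
```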


Robotics and Autonomous Systems | 2018

Unsupervised spatial lexical acquisition by updating a language model with place clues

Akira Taniguchi; Tadahiro Taniguchi; Tetsunari Inamura

This paper describes how to achieve highly accurate unsupervised spatial lexical acquisition from speech-recognition results that include phoneme recognition errors. In most research on lexical acquisition, the robot has no pre-existing lexical knowledge and acquires sequences of phonemes as words from continuous speech signals. In a previous study, we proposed a nonparametric Bayesian spatial concept acquisition method (SpCoA) that integrates the robot's position and words obtained by unsupervised word segmentation from uncertain syllable recognition results. However, SpCoA has a critical problem in lexical acquisition: word-segmentation boundaries are incorrect in many cases because of frequent phoneme recognition errors. Therefore, we propose an unsupervised machine learning method (SpCoA++) for the robust acquisition of novel words relating to places visited by the robot. SpCoA++ iteratively estimates spatial concepts and updates a language model using place information. It can select, from multiple word-segmentation results, a candidate containing words that better represent places, by maximizing the mutual information between segmented words and spatial concepts. The experimental results demonstrate that segmenting words based on place information significantly improves the phoneme accuracy rate of learned place-related words in comparison to conventional methods. The proposed method thus enables the robot to acquire words from speech signals more accurately and improves the estimation accuracy of the spatial concepts.
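
The candidate-selection criterion can be illustrated with a toy computation. The sketch below, which is not the authors' code, scores several hypothetical word-segmentation hypotheses by the empirical mutual information between segmented words and spatial-concept assignments and keeps the best one; the coupled language-model re-estimation of SpCoA++ is omitted, and the hypotheses and concept labels are made up.

```python
# Toy illustration: pick the segmentation hypothesis whose words share the most
# mutual information with the spatial concepts.  Hypotheses and labels invented.
from collections import Counter
import math

def mutual_information(pairs):
    """Empirical MI between words and concepts from (word, concept) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    words = Counter(w for w, _ in pairs)
    concepts = Counter(c for _, c in pairs)
    mi = 0.0
    for (w, c), n_wc in joint.items():
        p_wc = n_wc / n
        mi += p_wc * math.log(p_wc / ((words[w] / n) * (concepts[c] / n)))
    return mi

# Two hypothetical segmentations of the same speech data; concepts are ids 0 and 1.
candidates = {
    "hypothesis A": [("kitchen", 0), ("kitchen", 0), ("desk", 1), ("desk", 1)],
    "hypothesis B": [("kit", 0), ("chen", 0), ("de", 1), ("skdesk", 1), ("kit", 1)],
}
best = max(candidates, key=lambda k: mutual_information(candidates[k]))
print(best, {k: round(mutual_information(v), 3) for k, v in candidates.items()})
```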


International Conference on Social Robotics | 2017

Learning Relationships Between Objects and Places by Multimodal Spatial Concept with Bag of Objects

Shota Isobe; Akira Taniguchi; Yoshinobu Hagiwara; Tadahiro Taniguchi

Human support robots need to learn the relationships between objects and places to provide services such as cleaning rooms and locating objects through linguistic communication. In this paper, we propose a Bayesian probabilistic model that can automatically model and estimate the probability of objects existing in each place, using a multimodal spatial concept based on the co-occurrence of objects. In our experiments, we evaluated the estimation results for objects given a word expressing their place. Furthermore, we showed that the robot could perform tasks involving cleaning up objects, as an example of how the method can be used, and that it correctly learned the relationships between objects and places.
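
A stripped-down version of the object-place co-occurrence idea is sketched below: estimating p(object | place), and the reverse query useful for tidy-up, from bag-of-objects counts with a symmetric Dirichlet prior. The counts, category names, and prior value are illustrative assumptions, not data or parameters from the paper.

```python
# Toy object-place co-occurrence model with add-alpha (Dirichlet) smoothing.
import numpy as np

objects = ["cup", "book", "toy"]
places  = ["kitchen", "desk", "living room"]

# counts[i, j] = how often object j was observed at place i (toy numbers).
counts = np.array([[8.0, 1.0, 1.0],
                   [1.0, 9.0, 2.0],
                   [2.0, 2.0, 7.0]])
alpha = 1.0                                     # symmetric Dirichlet prior

def p_object_given_place(place):
    """Posterior predictive distribution over objects at a named place."""
    post = counts[places.index(place)] + alpha
    return dict(zip(objects, post / post.sum()))

def most_likely_place(obj):
    """Where to put (or look for) an object: argmax over places of p(place | object)."""
    post = counts[:, objects.index(obj)] + alpha
    return places[int(np.argmax(post / post.sum()))]

print(p_object_given_place("kitchen"))
print(most_likely_place("book"))
```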


Frontiers in Neurorobotics | 2017

Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

Akira Taniguchi; Tadahiro Taniguchi; Angelo Cangelosi

In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory channel and can associate words with any of four sensory channels (action, position, object, and color). This paper focuses on cross-situational learning that uses the co-occurrence between words and sensory-channel information in situations more complex than those considered in conventional cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided the robot with a sentence describing an object of visual attention and an accompanying action. The scenario was set as follows: the number of words per sensory channel was three or four, and the number of learning trials was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned with the proposed method.
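
For intuition, the sketch below demonstrates the basic cue that cross-situational learning exploits, using plain co-occurrence counting rather than the Bayesian generative model proposed in the paper: across situations, a word ends up paired with the sensory-channel value it co-occurs with most often. The situations, words, and channel values are invented for the example.

```python
# Intuition-only cross-situational learning via co-occurrence counting.
from collections import defaultdict

# Each situation: the tutor's words plus the observed value on each channel.
situations = [
    ({"red", "ball", "push"},   {"color": "red",  "object": "ball", "action": "push"}),
    ({"red", "cup", "grasp"},   {"color": "red",  "object": "cup",  "action": "grasp"}),
    ({"blue", "ball", "grasp"}, {"color": "blue", "object": "ball", "action": "grasp"}),
]

cooc = defaultdict(lambda: defaultdict(int))
for words, channels in situations:
    for w in words:
        for channel, value in channels.items():
            cooc[w][(channel, value)] += 1

# For each word, report the channel/value pair it co-occurred with most often.
for w, table in sorted(cooc.items()):
    (channel, value), n = max(table.items(), key=lambda kv: kv[1])
    print(f"{w!r:>8} -> {channel}={value} (co-occurred {n}x)")
```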


International Conference on Intelligent Autonomous Systems | 2016

Simultaneous Localization, Mapping and Self-body Shape Estimation by a Mobile Robot

Akira Taniguchi; Lv WanPeng; Tadahiro Taniguchi; Toshiaki Takano; Yoshinobu Hagiwara; Shiro Yano

This paper describes a new method for estimating the body shape of a mobile robot by using sensory-motor information. In many biological systems, being able to estimate one's body shape is important for behaving appropriately in a complex environment. Humans and other animals can form a body image and determine actions based on their recognized body shape. Conventional mobile robots, however, have not had the ability to estimate body shape; instead, developers have provided body-shape information to the robots. In this paper, we describe a new method that enables a robot to use only subjective information, e.g., motor commands and distance-sensor readings, to automatically estimate its own body shape. We call the method simultaneous localization, mapping, and self-body shape estimation (SLAM-SBE). The method is based on Bayesian statistics and is obtained by extending the simultaneous localization and mapping (SLAM) framework. Experimental results show that a mobile robot can obtain a self-body shape image represented by an occupancy grid by using only its sensory-motor information (i.e., without any objective measurement of its body).
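
The body-shape representation named in the abstract, an occupancy grid, can be sketched with a standard log-odds update kept in the robot-centred frame: cells that keep being reported occupied at the same body-relative coordinates, whatever the robot's pose, accumulate high occupancy and trace out the body outline. This is a generic occupancy-grid sketch under assumed sensor-model values, not the SLAM-SBE algorithm itself.

```python
# Generic robot-centred occupancy grid with a standard log-odds update.
# Assumed grid size and log-odds increments; NOT the SLAM-SBE algorithm.
import numpy as np

GRID = 21                        # robot-centred grid; cell (10, 10) is the robot origin
L_OCC, L_FREE = 0.85, -0.4       # assumed log-odds increments for hit / miss
log_odds = np.zeros((GRID, GRID))

def update(hits, misses):
    """hits / misses are lists of (row, col) cells in the robot-centred grid."""
    for r, c in hits:
        log_odds[r, c] += L_OCC
    for r, c in misses:
        log_odds[r, c] += L_FREE

def occupancy():
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-log_odds))

# Toy observations: cells adjacent to the origin are always hit (the body),
# while farther cells are sometimes reported free (the environment).
for _ in range(50):
    update(hits=[(10, 11), (11, 10), (9, 10), (10, 9)],
           misses=[(10, 15), (5, 10), (10, 3)])
print(np.round(occupancy()[8:13, 8:13], 2))   # body cells stand out near the centre
```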


IFAC-PapersOnLine | 2016

Simultaneous Estimation of Self-position and Word from Noisy Utterances and Sensory Information

Akira Taniguchi; Tadahiro Taniguchi; Tetsunari Inamura


Transactions of the Institute of Systems, Control and Information Engineers | 2014

Research on Simultaneous Estimation of Self-Location and Location Concepts

Akira Taniguchi; Haruki Yoshizaki; Tetsunari Inamura; Tadahiro Taniguchi


Intelligent Robots and Systems | 2017

Online spatial concept and lexical acquisition with simultaneous localization and mapping

Akira Taniguchi; Yoshinobu Hagiwara; Tadahiro Taniguchi; Tetsunari Inamura


arXiv: Robotics | 2018

SpCoSLAM 2.0: An Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

Akira Taniguchi; Yoshinobu Hagiwara; Tadahiro Taniguchi; Tetsunari Inamura

Collaboration


Akira Taniguchi's most frequent co-authors and their affiliations.

Top Co-Authors

Tetsunari Inamura
National Institute of Informatics

Amir Aly
Ritsumeikan University

Lv WanPeng
Ritsumeikan University

Shiro Yano
Tokyo University of Agriculture and Technology

Shota Isobe
Ritsumeikan University