
Publication


Featured research published by Tadahiro Taniguchi.


IEEE Intelligent Vehicles Symposium | 2012

Semiotic prediction of driving behavior using unsupervised double articulation analyzer

Tadahiro Taniguchi; Shogo Nagasaka; Kentarou Hitomi; Naiwala P. Chandrasiri; Takashi Bando

In this paper, we propose a novel semiotic prediction method for driving behavior based on a double articulation structure. It has been reported that predicting driving behavior from multivariate time series behavior data using machine learning methods, e.g., hybrid dynamical systems, hidden Markov models, and Gaussian mixture models, is difficult because a driver's behavior is affected by various contextual information. To overcome this problem, we assume that the contextual information has a double articulation structure and develop a novel semiotic prediction method by extending a nonparametric Bayesian unsupervised morphological analyzer. The effectiveness of our prediction method was evaluated using synthetic data and real driving data. In these experiments, the proposed method achieved prediction horizons 2-6 times longer than those of conventional methods.
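The double articulation idea behind this prediction method can be illustrated with a toy example. The sketch below is only a rough analogue, not the paper's model: k-means discretization stands in for the hidden-state layer, run-length chunking and bigram counts stand in for the nonparametric Bayesian analyzer, and the signal is synthetic.

```python
# Toy illustration of the double articulation idea on synthetic data; the real
# method uses a nonparametric Bayesian morphological analyzer, whereas this
# sketch only uses k-means discretization, run-length chunking and bigrams.
import itertools
from collections import Counter, defaultdict

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic 1-D "driving" signal: alternating cruise / braking regimes.
signal = np.concatenate([rng.normal(m, 0.1, 40) for m in [1.0, 0.2, 1.0, 0.2, 1.0]])

# Layer 1: discretize the continuous data into "letters" (hidden states).
letters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(signal.reshape(-1, 1))

# Layer 2: chunk letter runs into "words" (state, duration) and count word bigrams.
words = [(state, len(list(run))) for state, run in itertools.groupby(letters)]
bigrams = defaultdict(Counter)
for (a, _), nxt in zip(words, words[1:]):
    bigrams[a][nxt] += 1

# Semiotic (word-level) prediction: given the current chunk, predict the most
# likely next chunk, i.e. many future time steps at once instead of one step.
current_state = words[-1][0]
next_state, next_duration = bigrams[current_state].most_common(1)[0][0]
print(f"predicted next chunk: state {next_state} for about {next_duration} steps")
```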


IEEE/SICE International Symposium on System Integration | 2011

Double articulation analyzer for unsegmented human motion using Pitman-Yor language model and infinite hidden Markov model

Tadahiro Taniguchi; Shogo Nagasaka

We propose an unsupervised double articulation analyzer for human motion data. Double articulation is a two-layered hierarchical structure underlying natural language, human motion, and other natural data produced by humans. A double articulation analyzer estimates the hidden structure of observed data by segmenting and chunking the target data. We develop a double articulation analyzer using a sticky hierarchical Dirichlet process HMM (sticky HDP-HMM), a nonparametric Bayesian model, and an unsupervised morphological analysis based on the nested Pitman-Yor language model, which can chunk documents without any dictionary. We conducted an experiment to evaluate this method: the proposed method could extract unit motions from unsegmented human motion data by analyzing the hidden double articulation structure.
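The chunking layer has no off-the-shelf Python implementation (the paper uses the nested Pitman-Yor language model). As a hedged illustration of chunking without a dictionary, the toy sketch below cuts a hidden-state "letter" sequence wherever the empirical transition probability drops, a classic Harris-style heuristic rather than the NPYLM itself.

```python
# Toy stand-in for the chunking layer: instead of the nested Pitman-Yor
# language model, cut a hidden-state "letter" string wherever the empirical
# transition probability is low (a Harris-style heuristic, no dictionary).
from collections import Counter

letters = "abcabcxyxyxyabcabcxyxy"            # state labels from the HMM layer
pairs = Counter(zip(letters, letters[1:]))     # bigram counts
unigrams = Counter(letters[:-1])               # counts of the left symbol

def p_next(a, b):
    """Empirical transition probability P(b | a)."""
    return pairs[(a, b)] / unigrams[a]

chunks, start = [], 0
for i in range(len(letters) - 1):
    if p_next(letters[i], letters[i + 1]) < 0.55:   # low-probability boundary
        chunks.append(letters[start:i + 1])
        start = i + 1
chunks.append(letters[start:])
print(chunks)   # ['abc', 'abc', 'xyxyxy', 'abc', 'abc', 'xyxy']
```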


Advanced Robotics | 2016

Symbol emergence in robotics: a survey

Tadahiro Taniguchi; Takayuki Nagai; Tomoaki Nakamura; Naoto Iwahashi; Tetsuya Ogata; Hideki Asoh

Humans can learn a language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form symbol systems and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted regarding the construction of robotic systems and machine learning methods that can learn a language through embodied multimodal interaction with their environment and other systems. Understanding human social interactions and developing a robot that can smoothly communicate with human users in the long term require an understanding of the dynamics of symbol systems. The embodied cognition and social interaction of participants gradually alter a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER represents a constructive approach towards a symbol emergence system. The symbol emergence system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. In this paper, specifically, we describe some state-of-the-art research topics concerning SER, such as multimodal categorization, word discovery, and double articulation analysis. They enable robots to discover words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions for research in SER.


Intelligent Robots and Systems (IROS) | 2012

Online learning of concepts and words using multimodal LDA and hierarchical Pitman-Yor Language Model

Takaya Araki; Tomoaki Nakamura; Takayuki Nagai; Shogo Nagasaka; Tadahiro Taniguchi; Naoto Iwahashi

In this paper, we propose an online algorithm for multimodal categorization based on autonomously acquired multimodal information and partial words given by human users. For multimodal concept formation, multimodal latent Dirichlet allocation (MLDA) using Gibbs sampling is extended to an online version. We introduce a particle filter, which significantly improves the performance of the online MLDA, to keep track of good models among various models with different parameters. We also introduce an unsupervised word segmentation method based on the hierarchical Pitman-Yor language model (HPYLM). Since the HPYLM requires no predefined lexicon, the robot system can learn concepts and words in a completely unsupervised manner. The proposed algorithms are implemented on a real robot and tested using real everyday objects to show the validity of the proposed system.
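There is no standard implementation of online MLDA with a particle filter, but the online flavor of the idea can be approximated with scikit-learn's online variational LDA. The sketch below is a simplification under that assumption: it drops the particle filter and the HPYLM entirely and only shows incremental topic (concept) updates on fake multimodal bag-of-feature counts.

```python
# Rough analogue of the online concept-learning loop, assuming scikit-learn is
# available. Online variational LDA (partial_fit) is updated on fake
# bag-of-feature counts whose columns concatenate visual, haptic, audio and
# word histograms for each observed object.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_visual, n_haptic, n_audio, n_words = 20, 10, 10, 30
n_features = n_visual + n_haptic + n_audio + n_words

lda = LatentDirichletAllocation(n_components=5, learning_method="online",
                                random_state=0)

# Objects arrive in small batches, as a robot would observe them one by one.
for _ in range(10):
    counts = rng.poisson(1.0, size=(8, n_features))   # fake multimodal histograms
    lda.partial_fit(counts)                           # incremental concept update

new_object = rng.poisson(1.0, size=(1, n_features))
print("concept (topic) posterior:", lda.transform(new_object).round(2))
```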


IEEE Intelligent Vehicles Symposium | 2013

Unsupervised drive topic finding from driving behavioral data

Takashi Bando; Kazuhito Takenaka; Shogo Nagasaka; Tadahiro Taniguchi

Continuous driving-behavioral data can be converted automatically into sequences of “drive topics” in natural language; for example, “gas pedal operating,” “high-speed cruise,” then “stopping and standing still with brakes on.” In regard to developing advanced driver-assistance systems (ADASs), various methods for recognizing driver behavior have been proposed. Most of these methods employ a supervised approach based on human tags. Unfortunately, preparing complete annotations is practically impossible with massive driving-behavioral data because of the great variety of driving scenes. To overcome that difficulty, in this study, a double articulation analyzer (DAA) is used to segment continuous driving-behavioral data into sequences of discrete driving scenes. Thereafter, latent Dirichlet allocation (LDA) is used to cluster the driving scenes into a small number of so-called “drive topics” according to the occurrence frequency of the physical features observed in the scenes. Because both DAA and LDA are unsupervised methods, they achieve data-driven scene segmentation and drive-topic estimation without human tags. Labels of the extracted drive topics are also determined automatically using the distributions of the physical behavioral features included in each drive topic. The proposed framework therefore translates the output of sensors monitoring the driver and the driving environment into natural language. The efficiency of the proposed method is evaluated using a massive driving-behavior data set of 90 drives totaling more than 78 hours and 3700 km.
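The automatic labeling step can be illustrated with a small hand-made example: each drive topic is named after the behavioral features it assigns the highest probability to. The feature names and topic-feature matrix below are hypothetical, standing in for the feature distributions that LDA would actually produce.

```python
# Hypothetical illustration of the automatic drive-topic labeling step: each
# topic is named after the physical features it gives the highest probability.
# The feature names and the topic-feature matrix are made up; in the paper the
# rows would come from the fitted LDA topic-feature distributions.
import numpy as np

features = ["brake_on", "low_speed", "high_speed", "steady_throttle",
            "accel_pedal_active", "steering_active"]
topic_feature = np.array([
    [0.45, 0.40, 0.02, 0.03, 0.05, 0.05],   # looks like "stopping with brakes on"
    [0.02, 0.03, 0.50, 0.40, 0.03, 0.02],   # looks like "high-speed cruise"
    [0.05, 0.10, 0.05, 0.05, 0.50, 0.25],   # looks like "gas pedal operating"
])

for k, row in enumerate(topic_feature):
    top = [features[i] for i in row.argsort()[::-1][:2]]
    print(f"drive topic {k}: {' + '.join(top)}")
```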


Intelligent Robots and Systems (IROS) | 2012

Contextual scene segmentation of driving behavior based on double articulation analyzer

Kazuhito Takenaka; Takashi Bando; Shogo Nagasaka; Tadahiro Taniguchi; Kentarou Hitomi

Various advanced driver assistance systems (ADASs) have recently been developed, such as Adaptive Cruise Control and the Precrash Safety System. However, most ADASs can operate in only some driving situations because of the difficulty of recognizing contextual information. For closer cooperation between a driver and vehicle, the vehicle should recognize a wider range of situations, similar to that recognized by the driver, and assist the driver with appropriate timing. In this paper, we assumed a double articulation structure in driving behavior data and segmented the driving behavior into meaningful chunks for driving scene recognition, in a manner similar to natural language processing (NLP). A double articulation analyzer translated the driving behavior into meaningless manemes, which are the smallest units of driving behavior just like phonemes in NLP, and from them constructed navemes, which are meaningful chunks of driving behavior just like morphemes. As a result of this two-phase analysis, we found that driving chunks, analogous to words in language, were closer to the complicated, contextual driving scene segmentation produced by human recognition.


Intelligent Robots and Systems (IROS) | 2014

Mutual learning of an object concept and language model based on MLDA and NPYLM

Tomoaki Nakamura; Takayuki Nagai; Kotaro Funakoshi; Shogo Nagasaka; Tadahiro Taniguchi; Naoto Iwahashi

Humans develop their concept of an object by classifying it into a category, and at the same time acquire language by interacting with others. Thus, the meaning of a word can be learnt by connecting the recognized word and concept. We consider such an ability to be important in allowing robots to flexibly develop their knowledge of language and concepts. Accordingly, we propose a method that enables robots to acquire such knowledge. The object concept is formed by classifying multimodal information acquired from objects, and the language model is acquired from human speech describing object features. We propose a stochastic model of language and concepts, and knowledge is learnt by estimating the model parameters. The important point is that language and concepts are interdependent: there is a high probability that the same words will be uttered about objects in the same category, and, similarly, objects about which the same words are uttered are highly likely to have the same features. Using this relation, the accuracy of both speech recognition and object classification can be improved by the proposed method. However, it is difficult to estimate the parameters of the proposed model directly, because the model requires many parameters. Therefore, we approximate the proposed model and estimate its parameters using a nested Pitman-Yor language model and multimodal latent Dirichlet allocation to acquire the language and the concepts, respectively.


IEEE Intelligent Vehicles Symposium | 2014

Visualization of driving behavior using deep sparse autoencoder

Hailong Liu; Tadahiro Taniguchi; Toshiaki Takano; Yusuke Tanaka; Kazuhito Takenaka; Takashi Bando

Driving behavioral data, which includes accelerator opening rate, steering angle, brake master-cylinder pressure, and various other signals, is too high-dimensional for people to review their driving behavior directly: the raw data is not intuitive for drivers who want to look back on their recorded driving. We used a deep sparse autoencoder to extract a low-dimensional, high-level representation from high-dimensional raw driving behavioral data obtained from a controller area network. Based on this low-dimensional representation, we propose two visualization methods called Driving Cube and Driving Color Map. Driving Cube is a cubic representation displaying the extracted three-dimensional features. Driving Color Map is a colored trajectory shown on a road map representing the extracted features; the trajectory is colored using the RGB color space, which corresponds to the extracted three-dimensional features. To evaluate the proposed method for extracting low-dimensional features, we conducted an experiment and found that several differences in recorded driving behavior could be identified by viewing the visualized Driving Color Map, and that our visualization methods can help people recognize different driving behaviors. To evaluate the effectiveness of the low-dimensional representation, we compared the deep sparse autoencoder with conventional methods from the viewpoint of the linear separability of elemental driving behaviors. As a result, our method outperformed the conventional methods.
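A minimal sketch of the feature-extraction and coloring pipeline is shown below, assuming PyTorch is available. A tiny autoencoder with a 3-unit bottleneck and an L1 sparsity penalty stands in for the paper's deep sparse autoencoder, and random vectors stand in for CAN signals; the extracted 3-D codes are min-max scaled to RGB, which is the essence of the Driving Color Map.

```python
# Minimal sketch, assuming PyTorch: a tiny autoencoder with a 3-unit bottleneck
# and an L1 sparsity penalty stands in for the deep sparse autoencoder, and
# random vectors stand in for CAN signals. The 3-D codes are min-max scaled to
# [0, 1] so each frame gets an (R, G, B) color, as in the Driving Color Map.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)                         # fake 10-dim driving-behavior frames

encoder = nn.Sequential(nn.Linear(10, 16), nn.Sigmoid(), nn.Linear(16, 3), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(3, 16), nn.Sigmoid(), nn.Linear(16, 10))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

for _ in range(200):
    code = encoder(X)
    recon = decoder(code)
    loss = ((recon - X) ** 2).mean() + 1e-3 * code.abs().mean()   # reconstruction + sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    code = encoder(X)
    rgb = (code - code.min(0).values) / (code.max(0).values - code.min(0).values + 1e-8)
print(rgb[:3])   # first three frames as (R, G, B) colors for the driving color map
```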


IEEE Transactions on Intelligent Transportation Systems | 2017

Visualization of Driving Behavior Based on Hidden Feature Extraction by Using Deep Learning

Hailong Liu; Tadahiro Taniguchi; Yusuke Tanaka; Kazuhito Takenaka; Takashi Bando

In this paper, we propose a visualization method for driving behavior that helps people to recognize distinctive driving behavior patterns in continuous driving behavior data. Driving behavior can be measured using various types of sensors connected to a controller area network. The measured multi-dimensional time series data are called driving behavior data. In many cases, the dimensions of the time series data are not statistically independent of each other; for example, accelerator opening rate and longitudinal acceleration are mutually dependent. We hypothesize that only a small number of hidden features that are essential for driving behavior generate the multivariate driving behavior data. Thus, extracting essential hidden features from measured, redundant driving behavior data is the problem to be solved to develop an effective visualization method for driving behavior. In this paper, we propose using a deep sparse autoencoder (DSAE) to extract hidden features for the visualization of driving behavior. Based on the DSAE, we propose a visualization method called a driving color map, produced by mapping the extracted 3-D hidden features to the red-green-blue (RGB) color space and placing the colors in the corresponding positions on a road map. A subjective experiment shows that the feature extraction method based on the DSAE is effective for visualization, and its performance is also evaluated numerically using a pattern recognition method. We also provide examples of applications that use driving color maps in practical problems. In summary, it is shown that the driving color map based on the DSAE facilitates better visualization of driving behavior.


IEEE Transactions on Intelligent Transportation Systems | 2015

Unsupervised Hierarchical Modeling of Driving Behavior and Prediction of Contextual Changing Points

Tadahiro Taniguchi; Shogo Nagasaka; Kentarou Hitomi; Kazuhito Takenaka; Takashi Bando

An unsupervised learning method, called the double articulation analyzer with temporal prediction (DAA-TP), is proposed on the basis of the original DAA model. The method will enable future advanced driving assistance systems to determine the driving context and predict possible scenarios of driving behavior by segmenting and modeling incoming driving-behavior time series data. In previous studies, we applied the DAA model to driving-behavior data and argued that contextual changing points can be estimated as the changing points of chunks. A sequence prediction method, which predicts the next hidden state sequence, was also proposed in a previous study. However, the original DAA model does not model the duration of chunks of driving behavior and cannot make temporal predictions of the scenarios. Our DAA-TP method explicitly models the duration of chunks of driving behavior on the assumption that driving-behavior data have a two-layered hierarchical structure, i.e., a double articulation structure. For this purpose, the hierarchical Dirichlet process hidden semi-Markov model is used to explicitly model the duration of segments of driving-behavior data, and a Poisson distribution is used to model the duration distribution of driving-behavior segments. The duration distribution of chunks of driving-behavior data is then calculated theoretically using the reproductive property of the Poisson distribution. We also propose a calculation method for obtaining the probability distribution of the remaining duration of the current driving word as a mixture of Poisson distributions, with a theoretical approximation for unobserved driving words. This method can calculate the posterior probability distribution of the next termination time of a chunk by explicitly modeling all probable chunking results for the observed data. The DAA-TP was applied to a synthetic data set having a double articulation structure to evaluate its model consistency. To evaluate the effectiveness of DAA-TP, we applied it to a driving-behavior data set recorded on actual factory circuits. The DAA-TP could predict the next termination time of chunks more accurately than the compared methods. We also report qualitative results to illustrate the potential capability of DAA-TP.
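The reproductive property this duration model relies on (a sum of independent Poisson-distributed segment durations is itself Poisson with the summed rate) is easy to check numerically. The rates below are made up, and the sketch assumes NumPy and SciPy; it is only a sanity check of that property, not the DAA-TP inference itself.

```python
# Numerical check of the reproductive property the duration model relies on:
# if a chunk is a run of segments whose durations are Poisson(lambda_i), the
# chunk duration is Poisson(sum_i lambda_i). The rates below are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
segment_rates = [4.0, 7.0, 3.0]          # mean duration of each segment in the chunk

# Monte Carlo: sample the segment durations and sum them per chunk.
samples = sum(rng.poisson(lam, size=100_000) for lam in segment_rates)

# Theory: a single Poisson with the summed rate.
theory = stats.poisson(sum(segment_rates))
print("empirical mean / var:  ", samples.mean().round(2), samples.var().round(2))
print("theoretical mean / var:", theory.mean(), theory.var())
```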

Collaboration


Dive into Tadahiro Taniguchi's collaborations.

Top Co-Authors

Naoto Iwahashi
National Institute of Information and Communications Technology

Shiro Yano
Tokyo University of Agriculture and Technology

Hailong Liu
Ritsumeikan University

Tetsunari Inamura
National Institute of Informatics