
Publication


Featured research published by Kazuhito Takenaka.


IEEE Intelligent Vehicles Symposium | 2013

Unsupervised drive topic finding from driving behavioral data

Takashi Bando; Kazuhito Takenaka; Shogo Nagasaka; Tadahiro Taniguchi

Continuous driving-behavioral data can be converted automatically into sequences of “drive topics” in natural language; for example, “gas pedal operating,” “high-speed cruise,” then “stopping and standing still with brakes on.” In regard to developing advanced driver-assistance systems (ADASs), various methods for recognizing driver behavior have been proposed. Most of these methods employ a supervised approach based on human tags. Unfortunately, preparing complete annotations is practically impossible with massive driving-behavioral data because of the great variety of driving scenes. To overcome that difficulty, in this study, a double articulation analyzer (DAA) is used to segment continuous driving-behavioral data into sequences of discrete driving scenes. Thereafter, latent Dirichlet allocation (LDA) is used to cluster the driving scenes into a small number of so-called “drive topics” according to the emergence frequency of the physical features observed in the scenes. Because both DAA and LDA are unsupervised methods, they achieve data-driven scene segmentation and drive-topic estimation without human tags. Labels for the extracted drive topics are also determined automatically by using the distributions of the physical behavioral features included in each drive topic. The proposed framework therefore translates the output of sensors monitoring the driver and the driving environment into natural language. The efficiency of the proposed method is evaluated using a massive driving-behavior data set comprising 90 drives totaling more than 78 hours and 3,700 km.
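
As a rough illustration of the clustering step, the sketch below uses scikit-learn's LatentDirichletAllocation as a stand-in for the paper's LDA model; the scene-by-feature count matrix and all sizes are invented for the example.

```python
# Minimal sketch of the drive-topic step, assuming scenes were already
# segmented by the DAA. All sizes and the count matrix are synthetic.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_scenes, n_features, n_topics = 200, 30, 8   # hypothetical sizes
# Each row counts how often each discretized physical feature occurred
# in one driving scene.
scene_feature_counts = rng.poisson(lam=2.0, size=(n_scenes, n_features))

lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
scene_topics = lda.fit_transform(scene_feature_counts)   # (200, 8)

# A label for each drive topic can be derived from its most probable
# physical features, mirroring the paper's automatic labeling.
top_features = lda.components_.argsort(axis=1)[:, -3:]
print(scene_topics[0].round(2), top_features[0])
```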


Intelligent Robots and Systems | 2012

Contextual scene segmentation of driving behavior based on double articulation analyzer

Kazuhito Takenaka; Takashi Bando; Shogo Nagasaka; Tadahiro Taniguchi; Kentarou Hitomi

Various advanced driver assistance systems (ADASs) have recently been developed, such as Adaptive Cruise Control and the Pre-Crash Safety System. However, most ADASs can operate in only some driving situations because of the difficulty of recognizing contextual information. For closer cooperation between a driver and the vehicle, the vehicle should recognize a wider range of situations, similar to the range recognized by the driver, and assist the driver with appropriate timing. In this paper, we assumed a double articulation structure in driving behavior data and segmented driving behavior into meaningful chunks for driving scene recognition, in a manner similar to natural language processing (NLP). A double articulation analyzer translated the driving behavior into meaningless manemes, which are the smallest units of driving behavior just like phonemes in NLP, and from them constructed navemes, which are meaningful chunks of driving behavior just like morphemes. As a result of this two-phase analysis, we found that driving chunks, the equivalents of words in language, closely matched the complicated, contextual driving scene segmentation produced by human recognition.
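
The two-phase idea can be caricatured in a few lines: k-means quantization plays the role of maneme extraction, and a greedy bigram-merging heuristic (not the paper's actual analyzer) plays the role of chunking manemes into navemes. Everything here, including the data, is an illustrative assumption.

```python
# Toy two-phase sketch: quantize continuous behavior into "manemes",
# then chunk frequent maneme bigrams into "navemes". The merging rule
# is a BPE-like heuristic, not the DAA's actual model.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
frames = rng.normal(size=(500, 4))            # synthetic sensor frames
manemes = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(frames)

seq = [str(m) for m in manemes]
for _ in range(10):                           # merge the most frequent bigram
    pairs = Counter(zip(seq, seq[1:]))
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
            merged.append(a + "+" + b)
            i += 2
        else:
            merged.append(seq[i])
            i += 1
    seq = merged

print(seq[:10])                               # chunked "naveme" sequence
```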


Intelligent Vehicles Symposium | 2014

Visualization of driving behavior using deep sparse autoencoder

Hailong Liu; Tadahiro Taniguchi; Toshiaki Takano; Yusuke Tanaka; Kazuhito Takenaka; Takashi Bando

Driving behavioral data, which includes the accelerator opening rate, steering angle, brake master-cylinder pressure, and various other signals, is too high-dimensional for people to review their driving behavior; the raw data is not very intuitive for drivers to understand when they look back on their recorded driving behavior. We used a deep sparse autoencoder to extract a low-dimensional, high-level representation from high-dimensional raw driving behavioral data obtained from a controller area network. Based on this low-dimensional representation, we propose two visualization methods called Driving Cube and Driving Color Map. Driving Cube is a cubic representation displaying the extracted three-dimensional features. Driving Color Map is a colored trajectory shown on a road map representing the extracted features. The trajectory is colored using the RGB color space, which corresponds to the extracted three-dimensional features. To evaluate the proposed method for extracting low-dimensional features, we conducted an experiment in which viewers of the visualized Driving Color Map found several differences in the recorded driving behavior, indicating that our visualization methods can help people recognize different driving behaviors. To evaluate the effectiveness of the low-dimensional representation, we compared the deep sparse autoencoder with conventional methods from the viewpoint of the linear separability of elemental driving behaviors. As a result, our method outperformed the conventional methods.
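
A compact sketch of a deep sparse autoencoder with a 3-unit bottleneck, written in PyTorch; the layer sizes, sparsity weight, and random stand-in for CAN signals are assumptions, not the paper's configuration.

```python
# Sketch of a deep sparse autoencoder: reconstruct the input through a
# 3-D bottleneck while an L1 penalty keeps activations sparse.
import torch
import torch.nn as nn

class DeepSparseAE(nn.Module):
    def __init__(self, n_inputs=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 16), nn.Sigmoid(),
            nn.Linear(16, 8), nn.Sigmoid(),
            nn.Linear(8, 3), nn.Sigmoid(),    # 3-D feature, usable as RGB
        )
        self.decoder = nn.Sequential(
            nn.Linear(3, 8), nn.Sigmoid(),
            nn.Linear(8, 16), nn.Sigmoid(),
            nn.Linear(16, n_inputs),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = DeepSparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 20)                      # stand-in for CAN frames
for _ in range(100):
    recon, z = model(x)
    loss = ((recon - x) ** 2).mean() + 1e-3 * z.abs().mean()  # L1 sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()
```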


IEEE Transactions on Intelligent Transportation Systems | 2017

Visualization of Driving Behavior Based on Hidden Feature Extraction by Using Deep Learning

Hailong Liu; Tadahiro Taniguchi; Yusuke Tanaka; Kazuhito Takenaka; Takashi Bando

In this paper, we propose a visualization method for driving behavior that helps people recognize distinctive driving behavior patterns in continuous driving behavior data. Driving behavior can be measured using various types of sensors connected to a controller area network. The measured multi-dimensional time series data are called driving behavior data. In many cases, the dimensions of the time series data are not statistically independent of each other; for example, the accelerator opening rate and longitudinal acceleration are mutually dependent. We hypothesize that only a small number of hidden features that are essential for driving behavior generate the multivariate driving behavior data. Thus, extracting essential hidden features from measured redundant driving behavior data is the problem to be solved to develop an effective visualization method for driving behavior. In this paper, we propose using a deep sparse autoencoder (DSAE) to extract hidden features for the visualization of driving behavior. Based on the DSAE, we propose a visualization method called a driving color map, created by mapping the extracted 3-D hidden features to the red-green-blue (RGB) color space and placing the colors in the corresponding positions on the map. A subjective experiment shows that the feature extraction method based on the DSAE is effective for visualization. In addition, its performance is evaluated numerically using a pattern recognition method. We also provide examples of applications that use driving color maps in practical problems. In summary, we show that the driving color map based on the DSAE facilitates better visualization of driving behavior.
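
The color-mapping step itself is simple to sketch: min-max scale the 3-D hidden features to [0, 1] and use them directly as RGB values along the trajectory. The trajectory and features below are synthetic placeholders for map positions and DSAE outputs.

```python
# Sketch of a driving color map: 3-D hidden features, scaled to [0, 1],
# color a 2-D trajectory. Positions and features are synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 300)
xy = np.column_stack([t, np.sin(t)])       # stand-in for map positions
feats = rng.random((300, 3))               # stand-in for DSAE features

lo, hi = feats.min(axis=0), feats.max(axis=0)
rgb = (feats - lo) / (hi - lo + 1e-9)      # min-max scale to RGB
plt.scatter(xy[:, 0], xy[:, 1], c=rgb, s=8)
plt.title("Driving color map (synthetic)")
plt.show()
```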


IEEE Transactions on Intelligent Transportation Systems | 2015

Unsupervised Hierarchical Modeling of Driving Behavior and Prediction of Contextual Changing Points

Tadahiro Taniguchi; Shogo Nagasaka; Kentarou Hitomi; Kazuhito Takenaka; Takashi Bando

An unsupervised learning method, called the double articulation analyzer with temporal prediction (DAA-TP), is proposed on the basis of the original DAA model. The method will enable future advanced driving assistance systems to determine the driving context and predict possible scenarios of driving behavior by segmenting and modeling incoming driving-behavior time series data. In previous studies, we applied the DAA model to driving-behavior data and argued that contextual changing points can be estimated as the changing points of chunks. A sequence prediction method, which predicts the next hidden-state sequence, was also proposed in a previous study. However, the original DAA model does not model the duration of chunks of driving behavior and cannot make temporal predictions of the scenarios. Our DAA-TP method explicitly models the duration of chunks of driving behavior on the assumption that driving-behavior data have a two-layered hierarchical structure, i.e., a double articulation structure. For this purpose, the hierarchical Dirichlet process hidden semi-Markov model is used to explicitly model the duration of segments of driving-behavior data, with a Poisson distribution modeling the duration of each segment. The duration distribution of chunks of driving-behavior data is then calculated theoretically using the reproductive property of the Poisson distribution. We also propose a calculation method for obtaining the probability distribution of the remaining duration of the current driving word as a mixture of Poisson distributions, with a theoretical approximation for unobserved driving words. This method can calculate the posterior probability distribution of the next termination time of chunks by explicitly modeling all probable chunking results for the observed data. The DAA-TP was applied to a synthetic data set having a double articulation structure to evaluate its model consistency. To evaluate its effectiveness, we applied the DAA-TP to a driving-behavior data set recorded on actual factory circuits. The DAA-TP predicted the next termination time of chunks more accurately than the compared methods. We also report qualitative results for understanding the potential capability of the DAA-TP.
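
The reproductive property the abstract relies on can be stated in one line: a chunk whose segments have independent Poisson durations again has a Poisson duration.

```latex
% Segment durations d_i ~ Poisson(lambda_i), independent; then the
% chunk duration is
\[
  D = \sum_{i=1}^{n} d_i \;\sim\; \mathrm{Poisson}\!\left(\sum_{i=1}^{n} \lambda_i\right),
\]
% which is why the remaining duration of a chunk can be written as a
% mixture of Poisson terms over the probable chunking results.
```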


Intelligent Robots and Systems | 2013

Automatic drive annotation via multimodal latent topic model

Takashi Bando; Kazuhito Takenaka; Shogo Nagasaka; Tadahiro Taniguchi

Time-series driving behavioral data and image sequences captured with car-mounted video cameras can be annotated automatically in natural language, for example, “in a traffic jam,” “leading vehicle is a truck,” or “there are three or more lanes.” Various driving support systems have been developed for safe and comfortable driving. To develop more effective driving assist systems, abstract recognition of the driving situation, performed just as a human driver performs it, is important for achieving fully cooperative driving between the driver and vehicle. To achieve human-like annotation of driving behavioral data and image sequences, we first divided continuous driving behavioral data into discrete symbols that represent driving situations. Then, using multimodal latent Dirichlet allocation, latent driving topics underlying each driving situation were estimated as a relation model among driving behavioral data, image sequences, and human-annotated tags. Finally, automatic annotation of the behavioral data and image sequences is achieved by calculating the predictive distribution of the annotations via the estimated latent driving topics. The proposed method intuitively annotated more than 50,000 frames of data, including urban road and expressway data. The effectiveness of the estimated drive topics was also evaluated by analyzing driving-situation classification performance. The topics represented the drive context efficiently, i.e., the drive topics led to a 95% lower-dimensional feature space and 6% higher accuracy compared with the high-dimensional raw-feature space. Moreover, the drive topics achieved performance almost equivalent to that of human annotators, especially in classifying traffic jams and the number of lanes.
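
The final annotation step amounts to a marginalization over topics: p(tag | scene) = Σ_k p(tag | topic_k) p(topic_k | scene). The sketch below shows that computation with invented placeholder matrices; the tag names and all probabilities are hypothetical.

```python
# Sketch of the annotation step: mix per-topic tag distributions by the
# scene's topic proportions. All matrices are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_topics, n_tags = 6, 10
p_tag_given_topic = rng.dirichlet(np.ones(n_tags), size=n_topics)  # (6, 10)
p_topic_given_scene = rng.dirichlet(np.ones(n_topics))             # (6,)

p_tag = p_topic_given_scene @ p_tag_given_topic                    # (10,)
tags = [f"tag_{i}" for i in range(n_tags)]                         # hypothetical
print(tags[int(p_tag.argmax())], p_tag.max())
```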


Systems, Man, and Cybernetics | 2016

Sequence Prediction of Driving Behavior Using Double Articulation Analyzer

Tadahiro Taniguchi; Shogo Nagasaka; Kentarou Hitomi; Naiwala P. Chandrasiri; Takashi Bando; Kazuhito Takenaka

A sequence prediction method for driving behavior data is proposed in this paper. The proposed method can predict a longer latent state sequence of driving behavior data than conventional sequence prediction methods. The proposed method is derived by focusing on the double articulation structure latently embedded in driving behavior data. The double articulation structure is a two-layer hierarchical structure originally found in spoken language, i.e., a sentence is a sequence of words and a word is a sequence of letters. Analogously, we assume that driving behavior data comprise a sequence of driving words and a driving word is a sequence of driving letters. The sequence prediction method is obtained by extending a nonparametric Bayesian unsupervised morphological analyzer using a nested Pitman-Yor language model (NPYLM), which was originally proposed in the natural language processing field. This extension allows the proposed method to analyze incomplete sequences of latent states of driving behavior and to predict subsequent latent states on the basis of a maximum a posteriori criterion. The extension requires a marginalization technique over an infinite number of possible driving words. We derived such a technique on the basis of several characteristics of the NPYLM. We evaluated this proposed sequence prediction method using three types of data: 1) synthetic data; 2) data from test drives around a driving course at a factory; and 3) data from drives on a public thoroughfare. The results showed that the proposed method made better long-term predictions than did the previous methods.
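
As a toy stand-in for the predictor, the sketch below continues a partial sequence of "driving words" with a bigram model built from observed counts; the actual method instead marginalizes over an infinite vocabulary via the NPYLM.

```python
# Toy bigram continuation of a partial "driving word" sequence; the
# history, words, and the bigram model itself are illustrative.
from collections import Counter, defaultdict

history = ["cruise", "decel", "stop", "accel", "cruise", "decel", "stop"]
bigrams = Counter(zip(history, history[1:]))
nexts = defaultdict(Counter)
for (a, b), c in bigrams.items():
    nexts[a][b] += c

def predict(seq, steps=3):
    out = list(seq)
    for _ in range(steps):
        cand = nexts.get(out[-1])
        if not cand:
            break
        out.append(cand.most_common(1)[0][0])  # MAP next word
    return out

print(predict(["accel"]))  # e.g. ['accel', 'cruise', 'decel', 'stop']
```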


Intelligent Vehicles Symposium | 2014

Generating contextual description from driving behavioral data

Takashi Bando; Kazuhito Takenaka; Shogo Nagasaka; Tadahiro Taniguchi

This paper presents an automatic translation method from time-series driving behavior into natural language with contextual information. Various advanced driver-assistance systems (ADASs) have been developed to reduce the number of traffic accidents, and multiple ADASs are required to reduce accidents further. For such multiple ADASs, considering the context of driving and selecting appropriate assistance is key because the systems have to handle extremely complicated driving situations consisting of drivers (and their intents and maneuvers), environments (including other traffic participants such as vehicles and pedestrians), and vehicle dynamics. In this paper, time-series driving behavior is segmented into typical driving-situation symbols, and the natural language expression of each situation is generated via the behavioral feature distribution observed in each situation. Owing to the symbolization of the driving behavior, the generated behavioral descriptions can be associated with their causes and results not on an actual time axis but on a situation-symbol axis, yielding contextual descriptions, e.g., “letting up on gas pedal to pass tollgate.” The effectiveness of the proposed method was evaluated using an actual data set of more than eight hours of driving over a distance of 300 km in total. Although contextual expressions are very diverse even among human drivers, the proposed method obtained an agreement of more than 70%.
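
A minimal sketch of the description step, assuming each situation symbol is summarized by its mean behavioral features: the dominant feature selects a canned phrase. Feature names and phrases are invented for illustration.

```python
# Pick a phrase for a situation symbol from its dominant behavioral
# feature. Feature names and phrase table are hypothetical.
import numpy as np

feature_names = ["gas_pedal", "brake_pressure", "speed", "steering"]
phrase_of = {
    "gas_pedal": "operating the gas pedal",
    "brake_pressure": "braking",
    "speed": "cruising at high speed",
    "steering": "steering through a curve",
}

def describe(feature_means):
    # z-score the symbol's mean features and name the most prominent one
    z = (feature_means - feature_means.mean()) / (feature_means.std() + 1e-9)
    return phrase_of[feature_names[int(z.argmax())]]

print(describe(np.array([0.1, 0.9, 0.2, 0.1])))  # -> "braking"
```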


Intelligent Vehicles Symposium | 2014

Prediction of Next Contextual Changing Point of Driving Behavior Using Unsupervised Bayesian Double Articulation Analyzer

Shogo Nagasaka; Tadahiro Taniguchi; Kentarou Hitomi; Kazuhito Takenaka; Takashi Bando

Future advanced driver assistance systems (ADASs) should observe driving behavior and detect the contextual changing points of driving behaviors. In this paper, we propose a novel method for predicting the next contextual changing point of driving behavior on the basis of a Bayesian double articulation analyzer. To develop the method, we extended a previously proposed semiotic predictor that uses an unsupervised double articulation analyzer to extract a two-layered hierarchical structure from driving-behavior data. We employ the hierarchical Dirichlet process hidden semi-Markov model [4] to model the duration of a segment of driving behavior explicitly, instead of the sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) employed in the previous model [13]. Then, to recover the hierarchical structure of contextual driving behavior as a sequence of chunks, we use the nested Pitman-Yor language model [6], which can extract latent words from sequences of latent letters. On the basis of this extension, we develop a method for calculating the posterior probability distribution of the next contextual changing point by theoretically marginalizing over the possible chunking results and potentially successive words. To evaluate the proposed method, we applied it to synthetic data and to driving behavior data recorded in a real environment. The results showed that the proposed method can predict the next contextual changing point more accurately and over a longer term than the compared methods: linear regression and recurrent neural networks, which were trained through a supervised learning scheme.
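
The prediction target can be sketched with SciPy: given Poisson duration models for a few candidate chunks and the elapsed time in the current chunk, the posterior over the remaining duration is a mixture over candidates. Rates, priors, and the elapsed time below are invented.

```python
# Remaining-duration posterior as a mixture of truncated Poissons over
# candidate chunks; all numbers are illustrative.
import numpy as np
from scipy.stats import poisson

lams = np.array([8.0, 15.0, 30.0])        # candidate chunk duration rates
prior = np.array([0.5, 0.3, 0.2])         # prior over candidate chunks
t = 10                                    # frames already observed

# Posterior over chunks given the chunk lasts at least t frames.
surv = poisson.sf(t - 1, lams)            # P(D >= t) per chunk
post = prior * surv / (prior * surv).sum()

r = np.arange(0, 40)                      # remaining-duration grid
# P(remaining = r | D >= t) per chunk, then mix over chunks.
p_r = sum(post[k] * poisson.pmf(t + r, lams[k]) / surv[k] for k in range(3))
print(r[p_r.argmax()])                    # MAP remaining duration
```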


IEEE Transactions on Intelligent Transportation Systems | 2016

Determining Utterance Timing of a Driving Agent With Double Articulation Analyzer

Tadahiro Taniguchi; Kai Furusawa; Hailong Liu; Yusuke Tanaka; Kazuhito Takenaka; Takashi Bando

In-vehicle speech-based interaction between a driver and a driving agent should be performed without affecting driving behavior. A driving agent provides information to the driver and assists with his/her driving behavior and non-driving-related tasks, e.g., selecting music and giving weather information. In this paper, we focus on a method for determining utterance timing when a driving agent provides non-driving-related information. If a driving agent provides a driver with non-driving-related information at an inappropriate moment, it will distract the driver and degrade driving safety. To solve or mitigate this problem, we propose a novel method for determining the utterance timing of a driving agent on the basis of a double articulation analyzer, an unsupervised nonparametric Bayesian machine learning method for detecting contextual change points. To verify the effectiveness of the method, we conducted two experiments: one on a short circuit around a park in an urban area, and the other on a long course in a town. The results show that the proposed method enables a driving agent to avoid inappropriate timing better than baseline methods.
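
One way to operationalize the idea, sketched with invented names and thresholds: queue non-driving-related utterances and release them only when the predicted distance to the next contextual changing point is small.

```python
# Gate utterances on predicted chunk boundaries; the threshold and the
# upstream predictor are placeholders, not the paper's settings.
def should_speak(predicted_remaining_frames, threshold=5):
    # Speak only when the current driving chunk is about to end, so the
    # utterance does not interrupt a coherent maneuver.
    return predicted_remaining_frames <= threshold

queue = ["Weather: light rain this evening."]
if queue and should_speak(predicted_remaining_frames=3):
    print(queue.pop(0))
```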

Collaboration


Dive into Kazuhito Takenaka's collaborations.

Top Co-Authors

Hailong Liu

Ritsumeikan University
