Publication


Featured research published by Takatsugu Hirayama.


Analysis and Modeling of Faces and Gestures | 2005

Facial expression representation based on timing structures in faces

Masahiro Nishiyama; Hiroaki Kawashima; Takatsugu Hirayama; Takashi Matsuyama

This paper presents a method for interpreting facial expressions based on temporal structures among partial movements in facial image sequences. To extract the structures, we propose a novel facial expression representation, which we call a facial score, similar to a musical score. The facial score enables us to describe facial expressions as spatio-temporal combinations of temporal intervals; each interval represents a simple motion pattern with the beginning and ending times of the motion. Thus, we can classify fine-grained expressions from multivariate distributions of temporal differences between the intervals in the score. In this paper, we provide a method to obtain the score automatically from input images using bottom-up clustering of dynamics. We evaluate the efficiency of facial scores by comparing the temporal structure of intentional smiles with that of spontaneous smiles.
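The interval-based representation can be sketched as follows. This is an illustrative toy, not the authors' implementation; the facial-part names, motion-pattern labels, and frame numbers are all invented:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    mode: str   # label of a simple motion pattern, e.g. "onset" or "apex"
    begin: int  # frame where the motion begins
    end: int    # frame where the motion ends

# Hypothetical facial score: one interval track per facial part.
score = {
    "eyes":  [Interval("onset", 10, 25), Interval("apex", 25, 60)],
    "mouth": [Interval("onset", 18, 30), Interval("apex", 30, 65)],
}

def onset_lag(score, part_a, part_b, mode="onset"):
    """Temporal difference between matching intervals of two facial parts."""
    a = next(i for i in score[part_a] if i.mode == mode)
    b = next(i for i in score[part_b] if i.mode == mode)
    return b.begin - a.begin

lag = onset_lag(score, "eyes", "mouth")  # mouth onset lags the eyes by 8 frames
```

Distributions of such lags over many sequences are what would separate, say, intentional from spontaneous smiles.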


IEEE International Conference on Automatic Face & Gesture Recognition | 2008

Person-independent face tracking based on dynamic AAM selection

Akihiro Kobayashi; Junji Satake; Takatsugu Hirayama; Hiroaki Kawashima; Takashi Matsuyama

We have developed a high-precision method that selects an appropriate model for a video image in order to track an unknown face in front of a large display. Currently, Active Appearance Models (AAMs) are used to track non-rigid objects, such as faces, because the models efficiently learn the correlation between shape and texture. The problem with an AAM is that when it tracks an unknown face, excessive training data increases tracking errors, because there is an intermediate model size beyond which the reduction in fitting performance outweighs the gains from any improved representational power of the model. To increase the accuracy with which an unknown face is tracked, we build clustered models from training datasets and select the cluster that includes a face similar to the unknown face. Our method of clustering and cluster selection is based on the Mutual Subspace Method (MSM). We demonstrated the effectiveness of our method using leave-one-out cross-validation.
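The cluster-selection step rests on subspace similarity. A minimal sketch of the Mutual Subspace Method's similarity measure (the cosine of the smallest canonical angle between two subspaces), built from plain SVD and not taken from the authors' code:

```python
import numpy as np

def subspace_basis(samples, dim):
    """Orthonormal basis (d x dim) of the subspace spanned by the row samples."""
    _, _, vt = np.linalg.svd(samples - samples.mean(axis=0), full_matrices=False)
    return vt[:dim].T

def msm_similarity(basis_a, basis_b):
    """Largest canonical correlation between two subspaces (1 = identical)."""
    s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(s.max())

# A subspace is maximally similar to itself; orthogonal subspaces score 0.
b_same = subspace_basis(np.random.default_rng(0).normal(size=(50, 10)), 3)
b_x, b_y = np.eye(10)[:, :3], np.eye(10)[:, 3:6]
```

Selecting a face cluster then amounts to picking the cluster whose model subspace maximizes this similarity to the subspace of the input face.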


International Conference on Pattern Recognition | 2010

Gaze Probing: Event-Based Estimation of Objects Being Focused On

Ryo Yonetani; Hiroaki Kawashima; Takatsugu Hirayama; Takashi Matsuyama

We propose a novel method to estimate the object that a user is focusing on by using the synchronization between the movements of objects and the user's eyes as a cue. We first design an event as a characteristic motion pattern, and we then embed it within the movement of each object. Since the user's ocular reactions to these events are easily detected using a passive camera-based eye tracker, we can successfully estimate the object that the user is focusing on as the one whose movement is most synchronized with the user's eye reaction. Experimental results obtained from the application of this system to dynamic content (consisting of scrolling images) demonstrate the effectiveness of the proposed method over existing methods.
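The synchronization cue can be sketched as a lagged correlation between each object's motion signal and the eye trace. This is a toy stand-in for the method, with invented signals and a 5-frame reaction delay:

```python
import numpy as np

def sync_score(object_motion, eye_motion, max_lag=10):
    """Best normalized correlation over reaction lags (the eye lags the object)."""
    o = (object_motion - object_motion.mean()) / (object_motion.std() + 1e-9)
    e = (eye_motion - eye_motion.mean()) / (eye_motion.std() + 1e-9)
    return max(np.dot(o[:len(o) - lag], e[lag:]) / (len(o) - lag)
               for lag in range(max_lag + 1))

def focused_object(object_motions, eye_motion):
    """Index of the object whose motion best matches the eye reaction."""
    return int(np.argmax([sync_score(m, eye_motion) for m in object_motions]))

t = np.arange(100)
motions = [np.sin(0.3 * t), np.sin(0.3 * t + np.pi)]  # two moving objects
eye = np.sin(0.3 * (t - 5))  # eye follows object 0 with a 5-frame delay
```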


Asian Conference on Pattern Recognition | 2013

Analysis of Soccer Coach's Eye Gaze Behavior

Atsushi Iwatsuki; Takatsugu Hirayama; Kenji Mase

How do people see a scene? To what do they pay attention in their field of view, and when? This depends on the observer's knowledge, experience, and so on. This study compares the eye movements of an expert and novices, and extracts the skill-based differences in their gaze behaviors. In this paper, we focus on the gaze behaviors of a soccer coach and nonprofessional viewers while watching a video of a soccer game, and analyze the relationships between the eye movements and dynamic salient objects, that is, the ball and the players, in the video. The results show that, when the ball and some players are near either of the goals, the expert pays attention not to them but to the many other players in the middle of the field. The findings of this study constitute a stepping stone toward modeling a skillful viewing technique and useful knowledge that can be taught to novices.


International Journal of Vehicular Technology | 2013

Analysis of Temporal Relationships between Eye Gaze and Peripheral Vehicle Behavior for Detecting Driver Distraction

Takatsugu Hirayama; Kenji Mase; Kazuya Takeda

A car driver's cognitive distraction is a major factor behind car accidents. One's state of mind is subconsciously exposed through reactions to external stimuli. Here, the visual event that occurs in front of the driver when a peripheral vehicle overtakes the driver's vehicle is regarded as the external stimulus. We focus on the temporal relationships between the driver's eye gaze and the peripheral vehicle's behavior. The analysis results showed that these temporal relationships depend on the driver's state. In particular, we confirmed that the gaze toward the stimulus arrives later under a distracted state, induced by a music-retrieval task using an automatic speech recognition system, than under a neutral state of driving without the secondary cognitive task. This temporal feature can contribute to detecting cognitive distraction automatically. A detector based on a Bayesian framework using this feature achieves better accuracy than one based on the percentage-road-center method.
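The timing feature can be sketched as a two-state Bayesian classifier over the gaze-reaction delay. The class-conditional statistics and priors below are invented for illustration, not taken from the paper:

```python
import math

# Hypothetical class-conditional statistics of the gaze-reaction delay (s):
# under distraction, the gaze toward the overtaking vehicle arrives later.
STATS = {"neutral": (0.4, 0.15), "distracted": (0.9, 0.30)}  # (mean, std), assumed
PRIOR = {"neutral": 0.5, "distracted": 0.5}                  # assumed priors

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def posterior_distracted(delay):
    """P(distracted | delay) via Bayes' rule over the two driver states."""
    joint = {s: gaussian_pdf(delay, *STATS[s]) * PRIOR[s] for s in STATS}
    return joint["distracted"] / sum(joint.values())
```

A late reaction (e.g. a 1.0 s delay) then yields a high distraction posterior, while a prompt one (0.3 s) favors the neutral state.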


International Conference on Intelligent Transportation Systems | 2012

Detection of driver distraction based on temporal relationship between eye-gaze and peripheral vehicle behavior

Takatsugu Hirayama; Kenji Mase; Kazuya Takeda

One's state of mind is subconsciously exposed through reactions to external stimuli. In this work, we focus on a car driver's cognitive distraction, specifically by analyzing the driver's internal state induced during a music-retrieval task using an automatic speech-recognition system. The visual event that occurs in front of the driver when a peripheral vehicle overtakes the driver's vehicle is regarded as the external stimulus. The analysis results showed that the temporal relationship between the driver's eye gaze and the peripheral vehicle's behavior depends on the driver's state. Specifically, we confirmed that the gaze toward the stimulus arrives later under the distracted state than under the neutral state without the secondary cognitive task. This temporal feature can contribute to detecting cognitive distraction automatically. A detector based on a Bayesian framework using this feature achieves better accuracy than one based on the percentage-road-center method.


IEEE MultiMedia | 2015

Viewpoint Sequence Recommendation Based on Contextual Information for Multiview Video

Xueting Wang; Takatsugu Hirayama; Kenji Mase

This automatic video sequence recommendation method selects optimal sets of context-dependent, high-quality viewpoints from multiview videos to enhance the viewing experience. The recommendation method bases its selections on user preferences.


International Symposium on Multimedia | 2014

Context-Dependent Viewpoint Sequence Recommendation System for Multi-view Video

Xueting Wang; Yuki Muramatu; Takatsugu Hirayama; Kenji Mase

Multi-view videos shot with multiple cameras have attracted great interest because of their flexibility in enhancing the quality of our daily viewing experience, especially for large-scale events. However, the growing number of cameras makes suitable viewpoint selection a burden even for experts. We therefore propose an automatic viewpoint sequence recommendation system that supports viewpoint selection for multi-view video, using a soccer game as an example. Unlike existing methods, our system focuses on context dependency through viewpoint evaluation and transition processes carried out by two types of agents: camera agents and a producer agent. Each camera agent evaluates view quality based on scene context, such as the positions of the ball and players, within a given production context, such as camera position and user preference. The producer agent selects the optimal set of viewpoints by taking into account both view quality and the production objectives. This context-dependent optimization generates varied viewing patterns suited to different scene and production contexts. Sequences generated by the system were experimentally compared with human selections to confirm the effectiveness of the proposed system. Our recommendation system has the potential to satisfy both common and personal viewing preferences for sports games.
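The producer agent's selection step can be sketched as follows. This is a toy with invented scores, using a simple greedy rule with a stay-bonus as a stand-in for the paper's optimization; the per-frame view-quality scores are assumed to come from the camera agents:

```python
# Toy sketch of producer-agent viewpoint selection (not the paper's optimizer).
def recommend_sequence(quality, stay_bonus=0.3):
    """quality: list of per-frame dicts {camera_id: view-quality score}.
    A bonus for the current camera discourages over-frequent switching."""
    sequence, current = [], None
    for frame in quality:
        best = max(frame, key=lambda cam: frame[cam]
                   + (stay_bonus if cam == current else 0.0))
        sequence.append(best)
        current = best
    return sequence

quality = [{"A": 0.9, "B": 0.2}, {"A": 0.6, "B": 0.7}, {"A": 0.1, "B": 0.9}]
seq = recommend_sequence(quality)  # sticks with "A" until "B" is clearly better
```

With the stay-bonus set to zero the selection degenerates to per-frame greedy switching, which illustrates why a transition process is needed at all.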


Augmented Human International Conference | 2014

Video generation method based on user's tendency of viewpoint selection for multi-view video contents

Yuki Muramatsu; Takatsugu Hirayama; Kenji Mase

A multi-view video allows users to watch video content, for example, live concerts or sports events, more freely from various viewpoints. However, users need to select a camera that captures the scene from their own preferred viewpoint at each event. In this paper, we propose a video generation method based on the user's View Tendency, a user-dependent tendency of viewpoint selection reflecting the user's interests in multi-view video content. The proposed method learns the View Tendency with a Support Vector Machine (SVM) using several measures, such as the geometric features of an object. The method then estimates the consistency of each viewpoint with the learned View Tendency and integrates the estimation results to obtain a temporal sequence of viewpoints. The proposed method reduces the users' burden of viewpoint selection and lets them watch a viewpoint sequence that reflects their interests, serving as viewing assistance for multi-view video content.
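The estimation step can be sketched as scoring each viewpoint's features with a learned linear decision function, which here stands in for the trained SVM; the features and weights are invented:

```python
import numpy as np

def viewpoint_sequence(features, w, b=0.0):
    """features: array (frames, viewpoints, dims). Per frame, pick the
    viewpoint whose decision value best matches the learned View Tendency."""
    scores = features @ w + b     # SVM-style decision value per viewpoint
    return scores.argmax(axis=1)  # most consistent viewpoint each frame

# Invented 2-D features (e.g. object size, centeredness), 2 frames x 2 views.
w = np.array([1.0, -0.5])  # hypothetical learned weights
features = np.array([[[0.9, 0.1], [0.2, 0.8]],
                     [[0.1, 0.9], [0.7, 0.2]]])
chosen = viewpoint_sequence(features, w)
```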


Articulated Motion and Deformable Objects | 2002

Face Recognition Based on Efficient Facial Scale Estimation

Takatsugu Hirayama; Yoshio Iwai; Masahiko Yachida

Facial recognition technology needs to be robust to arbitrary facial appearances because a face changes with facial expressions and poses. In this paper, we propose a method that automatically performs face recognition on variously scaled facial images. The method performs flexible feature matching using features normalized for facial scale. For normalization, the facial scale is probabilistically estimated and used as the scale factor of an improved Gabor wavelet transformation. We implement a face recognition system based on the proposed method and demonstrate its advantages through face recognition experiments. Our method is more efficient than existing ones and maintains high face recognition accuracy under facial scale variations.
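The scale-normalization idea can be sketched as stretching a Gabor filter's wavelength and envelope in proportion to the estimated facial scale. This uses a generic real-valued Gabor kernel, not the paper's improved transformation; the base parameters are invented:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter with odd side length `size`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # carrier along orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def scale_normalized_kernel(base_wavelength, base_sigma, face_scale,
                            size=31, theta=0.0):
    """Stretch wavelength and envelope by the estimated facial scale."""
    return gabor_kernel(size, base_wavelength * face_scale, theta,
                        base_sigma * face_scale)

kernel = scale_normalized_kernel(8.0, 4.0, face_scale=1.5)
```

Features extracted with such scale-matched filters are comparable across differently scaled faces, which is what makes the flexible feature matching possible.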
