
Publication


Featured research published by Tetsu Matsukawa.


Computer Vision and Pattern Recognition | 2016

Hierarchical Gaussian Descriptor for Person Re-identification

Tetsu Matsukawa; Takahiro Okabe; Einoshin Suzuki; Yoichi Sato

Describing the color and textural information of a person image is one of the most crucial aspects of person re-identification. In this paper, we present a novel descriptor based on a hierarchical distribution of pixel features. A hierarchical covariance descriptor has been successfully applied to image classification. However, the mean information of pixel features, which is absent in covariance, tends to be a major source of discriminative information for person images. To solve this problem, we describe a local region in an image via a hierarchical Gaussian distribution in which both means and covariances are included among the parameters. More specifically, we model the region as a set of multiple Gaussian distributions, in which each Gaussian represents the appearance of a local patch. The characteristics of the set of Gaussians are again described by another Gaussian distribution. In both steps, unlike the hierarchical covariance descriptor, the proposed descriptor can properly model both the mean and the covariance information of pixel features. The results of experiments conducted on five databases indicate that the proposed descriptor exhibits remarkably high performance, outperforming state-of-the-art descriptors for person re-identification.
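The two-step construction described above can be sketched in numpy, using the standard embedding of a Gaussian N(mu, Sigma) into a symmetric positive definite (SPD) matrix so that both mean and covariance survive. This is a minimal sketch of the idea, not the paper's exact pipeline; the regularization constant and the log-Euclidean mapping of patch Gaussians are assumptions chosen to keep the example self-contained.

```python
import numpy as np

def gaussian_embedding(features):
    """Embed the Gaussian N(mu, Sigma) of a feature set (n, d) into the
    (d+1)x(d+1) SPD matrix [[Sigma + mu mu^T, mu], [mu^T, 1]],
    so both mean and covariance information are kept."""
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    top = np.hstack([sigma + np.outer(mu, mu), mu[:, None]])
    bottom = np.append(mu, 1.0)[None, :]
    return np.vstack([top, bottom])

def hierarchical_gaussian(region_patches):
    """Step 1: one Gaussian per patch; step 2: a Gaussian over the
    (log-mapped, vectorized) patch Gaussians describing the whole region."""
    patch_vecs = []
    for patch in region_patches:          # patch: (n_pixels, d) pixel features
        P = gaussian_embedding(patch)
        # log-map the SPD matrix so Euclidean statistics are meaningful
        w, V = np.linalg.eigh(P)
        logP = V @ np.diag(np.log(w)) @ V.T
        patch_vecs.append(logP[np.triu_indices_from(logP)])
    return gaussian_embedding(np.array(patch_vecs))
```

With 4-dimensional pixel features, each patch Gaussian becomes a 5x5 SPD matrix (15 upper-triangular entries), and the region-level Gaussian over those vectors is a 16x16 SPD matrix.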


International Conference on Pattern Recognition | 2016

Person re-identification using CNN features learned from combination of attributes

Tetsu Matsukawa; Einoshin Suzuki

This paper presents fine-tuned CNN features for person re-identification. Recently, features extracted from top layers of a Convolutional Neural Network (CNN) pre-trained on a large annotated dataset, e.g., ImageNet, have been proven to be strong off-the-shelf descriptors for various recognition tasks. However, the large disparity between the pre-trained task, i.e., ImageNet classification, and the target task, i.e., person image matching, limits the performance of the CNN features for person re-identification. In this paper, we improve the CNN features by fine-tuning on a pedestrian attribute dataset. In addition to the classification loss for multiple pedestrian attribute labels, we propose new labels obtained by combining different attribute labels and use them for an additional classification loss function. The combination attribute loss forces the CNN to distinguish more person-specific information, yielding more discriminative features. After extracting features from the learned CNN, we apply conventional metric learning on a target re-identification dataset to further increase discriminative power. Experimental results on four challenging person re-identification datasets (VIPeR, CUHK, PRID450S and GRID) demonstrate the effectiveness of the proposed features.
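The "combination of attributes" idea can be illustrated with a small sketch: several binary attribute labels are merged into one combined class index that becomes an extra classification target. The attribute names and the binary encoding here are hypothetical examples, not the paper's actual label set.

```python
# Hypothetical sketch: turning K binary attribute labels into a single
# "combination attribute" class index used as an extra classification target.
def combination_label(attrs):
    """attrs: list of K binary attribute values, e.g. [male, backpack, hat].
    Returns one class index out of 2**K, so the extra loss must separate
    persons who share individual attributes but differ in the combination."""
    idx = 0
    for a in attrs:
        idx = idx * 2 + int(a)
    return idx

# Example: male (1), no backpack (0), hat (1) -> combined class 5 of 8
```

Training with both the per-attribute losses and this combined loss pushes the network to encode co-occurrences of attributes, which are more person-specific than any single attribute.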


Pattern Recognition | 2012

Image representation for generic object recognition using higher-order local autocorrelation features on posterior probability images

Tetsu Matsukawa; Takio Kurita

This paper presents a novel image representation method for generic object recognition using higher-order local autocorrelations on posterior probability images. The proposed method is an extension of the bag-of-features approach to posterior probability images. The standard bag-of-features approach can be approximately thought of as a method that classifies an image into the category whose sum of posterior probabilities on a posterior probability image is maximum. However, by using local autocorrelations of posterior probability images, the proposed method extracts richer information than the standard bag-of-features. Experimental results reveal that the proposed method exhibits higher classification performance than the standard bag-of-features method.
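A minimal sketch of the feature extraction on one class's posterior probability map: the 0th-order term reduces to the bag-of-features-style sum of posteriors, while 1st-order terms add the spatial co-occurrence information the abstract describes. The displacement set here is an illustrative assumption, not the paper's exact mask set.

```python
import numpy as np

def local_autocorrelations(prob, displacements=((0, 1), (1, 0), (1, 1))):
    """0th- and 1st-order local autocorrelations of a posterior probability
    map `prob` (H x W, posteriors of one class). The 0th-order term is the
    plain sum of posteriors; each 1st-order term correlates the map with a
    shifted copy of itself, capturing spatial co-occurrence."""
    feats = [prob.sum()]                      # 0th order: bag-of-features-like sum
    for dy, dx in displacements:
        shifted = prob[dy:, dx:]
        base = prob[:prob.shape[0] - dy, :prob.shape[1] - dx]
        feats.append((base * shifted).sum())  # correlation with shifted copy
    return np.array(feats)
```

Concatenating these vectors over all class probability maps yields the image representation; the 0th-order entries alone recover the standard bag-of-features statistic.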


International Conference on Computer Vision | 2010

Appearance-based smile intensity estimation by cascaded support vector machines

Keiji Shimada; Tetsu Matsukawa; Yoshihiro Noguchi; Takio Kurita

Facial expression recognition is one of the most challenging research areas in the image recognition field and has been studied actively for a long time. In particular, we consider the smile an important facial expression for communication between human beings, and also between humans and machines. Therefore, if we can detect a smile and estimate its intensity at low computational cost and high accuracy, many new applications become possible. In this paper, we focus on the smile among facial expressions and study feature extraction methods to detect a smile and estimate its intensity using only facial appearance information (facial parts detection is not required). We use the Local Intensity Histogram (LIH), the Center-Symmetric Local Binary Pattern (CS-LBP), or concatenated LIH and CS-LBP features to train a Support Vector Machine (SVM) for smile detection. Moreover, we construct the SVM smile detector as a cascaded structure both to maintain performance and to reduce computational cost, and we estimate the smile intensity from the posterior probability. As a consequence, we achieved both low computational cost and high performance on practical images, and we also implemented the proposed methods in a PC demonstration system.
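The cascade structure can be sketched generically: cheap early stages reject obvious non-smile windows so only promising ones reach the final, more expensive stage, whose posterior probability is read as the smile intensity. The function names and thresholding scheme below are illustrative assumptions, not the paper's trained detector.

```python
def cascade_detect(x, stages, final_prob):
    """Hypothetical cascade sketch. `stages` is a list of cheap
    (score_fn, threshold) pairs; a window is rejected as soon as one stage
    scores below its threshold, keeping the average cost low. Only windows
    surviving every stage reach `final_prob`, whose posterior probability
    is returned as the smile intensity in [0, 1]."""
    for score_fn, threshold in stages:
        if score_fn(x) < threshold:
            return 0.0            # rejected early: no smile, zero intensity
    return final_prob(x)          # survived all stages: posterior = intensity
```

Because most windows in a practical image contain no smile, the early rejections dominate, which is what keeps the whole cascade fast without hurting accuracy on the windows that matter.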


International Conference on Pattern Recognition | 2010

Action Recognition Using Three-Way Cross-Correlations Feature of Local Motion Attributes

Tetsu Matsukawa; Takio Kurita

This paper proposes a spatio-temporal feature using three-way cross-correlations of local motion attributes for action recognition. Recently, the cubic higher-order local auto-correlations (CHLAC) feature has shown high classification performance for action recognition. In previous research, the CHLAC feature was applied to binary motion image sequences that indicate moving or static points. However, each binary motion image loses information about the type of motion, such as the timing of change or the motion direction. Therefore, we can further improve classification accuracy by extending CHLAC to multivalued motion image sequences that consider several types of local motion attributes. The proposed method can also be viewed as an extension of the popular bag-of-features approach. Experimental results using two datasets show that the proposed method outperforms the CHLAC feature and the bag-of-features approach.


International Conference on Pattern Recognition | 2014

Person Re-identification via Discriminative Accumulation of Local Features

Tetsu Matsukawa; Takahiro Okabe; Yoichi Sato

Metric learning, which learns a distance metric that distinguishes different people while being insensitive to intra-person variations, is widely applied to person re-identification. In previous works, local histograms are densely sampled to extract spatially localized information from each person image. The extracted local histograms are then concatenated into one vector that is used as the input of metric learning. However, the dimensionality of such a concatenated vector often becomes large while the number of training samples is limited, which leads to an overfitting problem. In this work, we argue that this overfitting arises because each local histogram dimension (e.g., a color brightness bin) at the same position is treated separately when examining which part of the image is more discriminative. To solve this problem, we propose a method that analyzes discriminative image positions shared by different local histogram dimensions. A common weight map shared by different dimensions and a distance metric that emphasizes discriminative dimensions in the local histogram are jointly learned with a unified discriminative criterion. Our experiments using four different public datasets confirmed the effectiveness of the proposed method.
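The jointly learned model can be sketched as a distance that weights image positions with one shared map while a single within-histogram metric emphasizes discriminative dimensions. The factorized form below is an illustrative assumption about how the two learned components combine, not the paper's exact objective.

```python
import numpy as np

def weighted_local_distance(X, Y, w, M):
    """Hypothetical sketch of the learned distance. X, Y: (P, D) arrays of
    local histograms at P image positions; `w`: length-P common weight map
    shared by all histogram dimensions; `M`: (D, D) metric emphasizing
    discriminative dimensions within a local histogram."""
    diffs = X - Y                                        # (P, D) local differences
    per_pos = np.einsum('pd,de,pe->p', diffs, M, diffs)  # Mahalanobis term per position
    return float(w @ per_pos)                            # weight positions, then sum
```

Because `w` has only P parameters instead of one weight per concatenated dimension (P times D), sharing it across dimensions is exactly the kind of parameter reduction that combats the overfitting the abstract describes.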


Asian Conference on Computer Vision | 2009

Image classification using probability higher-order local auto-correlations

Tetsu Matsukawa; Takio Kurita

In this paper, we propose a novel method for generic object recognition by using higher-order local auto-correlations on probability images. The proposed method is an extension of the bag-of-features approach to posterior probability images. The standard bag-of-features approach can be approximately thought of as the sum of posterior probabilities on probability images, and spatial co-occurrences of posterior probabilities are not utilized; thus, its descriptive ability is limited. However, by using local auto-correlations of probability images, the proposed method extracts richer information than the standard bag-of-features. Experimental results show that the proposed method achieves higher classification performance than the standard bag-of-features.


International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | 2018

One-class selective transfer machine for personalized anomalous facial expression detection

Hirofumi Fujita; Tetsu Matsukawa; Einoshin Suzuki

An anomalous facial expression is a facial expression which scarcely occurs in daily life and conveys cues about an anomalous physical or mental condition. In this paper, we propose a one-class transfer learning method for detecting anomalous facial expressions. In facial expression detection, most articles propose generic models which predict the classes of the samples for all persons. However, people vary in facial morphology, e.g., thick versus thin eyebrows, and such individual differences often cause prediction errors. While a possible solution would be to learn a single-task classifier from samples of the target person only, it would often overfit due to the small sample size of the target person in real applications. To handle individual differences in anomaly detection, we extend Selective Transfer Machine (STM) (Chu et al., 2013), which learns a personalized classifier by re-weighting samples based on their proximity to the target samples. In contrast to related methods for personalized models of facial expressions, including STM, our method learns a one-class classifier which requires only one-class target and source samples, i.e., normal samples, so there is no need to collect anomalous samples, which scarcely occur. Experiments on a public dataset show that our method outperforms generic and single-task models using one-class SVM, and a state-of-the-art multi-task learning method.
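The re-weighting idea inherited from STM can be sketched simply: each source (generic) sample receives a weight proportional to its kernel similarity to the target person's samples, so the personalized one-class model is trained mostly on source faces that resemble the target. The Gaussian kernel and mean-similarity weighting below are an illustrative assumption; STM itself solves a kernel mean matching problem rather than this direct average.

```python
import numpy as np

def proximity_weights(source, target, bandwidth=1.0):
    """Hypothetical sketch of proximity-based sample re-weighting.
    source: (n_s, d) generic samples; target: (n_t, d) target-person samples.
    Returns normalized weights favoring source samples near the target."""
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
    sims = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)  # avg kernel similarity
    return sims / sims.sum()                                # normalized weights
```

These weights would then enter a weighted one-class objective fitted on normal samples only, which is what removes the need to collect the rare anomalous expressions.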


Journal of Intelligent Information Systems | 2018

Experimental validation for N-ary error correcting output codes for ensemble learning of deep neural networks

Kaikai Zhao; Tetsu Matsukawa; Einoshin Suzuki

N-ary error correcting output codes (ECOC) decompose a multi-class problem into simpler multi-class problems by splitting the classes into N subsets (meta-classes) to form an ensemble of N-class classifiers and combine them to make predictions. It is one of the most accurate ensemble learning methods for traditional classification tasks. Deep learning has gained increasing attention in recent years due to its successes on various tasks such as image classification and speech recognition. However, little is known about N-ary ECOC with deep neural networks (DNNs) as base learners, probably due to the long computation time. In this paper, we show by experiments that N-ary ECOC with DNNs as base learners generally exhibits superior performance compared with several state-of-the-art ensemble learning methods. Moreover, our work contributes to a more efficient setting of the two crucial hyperparameters of N-ary ECOC: the value of N and the number of base learners to train. We also explore valuable strategies for further improving the accuracy of N-ary ECOC.
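The encode/decode scheme described above can be sketched compactly: each base learner's column randomly splits the classes into N meta-classes, and prediction picks the class whose code row agrees with the most base learners. This is a minimal sketch of the N-ary ECOC mechanics, independent of the DNN base learners the paper studies.

```python
import numpy as np

def nary_ecoc_matrix(n_classes, n_learners, N, rng):
    """Hypothetical sketch: an (n_classes, n_learners) code matrix where each
    column randomly assigns every class to one of N meta-classes, defining
    one N-class problem per base learner."""
    return rng.integers(0, N, size=(n_classes, n_learners))

def decode(pred_meta, M):
    """Given each base learner's predicted meta-class, return the class
    whose code row agrees with the most base learners."""
    agreements = (M == np.array(pred_meta)[None, :]).sum(axis=1)
    return int(agreements.argmax())
```

The two hyperparameters the paper tunes appear directly here: N (the number of meta-classes per split) and n_learners (the number of columns, i.e., base learners to train).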


Pervasive Technologies Related to Assistive Environments | 2015

Toward a platform for collecting, mining, and utilizing behavior data for detecting students with depression risks

Einoshin Suzuki; Yutaka Deguchi; Tetsu Matsukawa; Shin Ando; Hiroaki Ogata; Masanori Sugimoto

In this paper, we present our plan for constructing a platform for collecting, mining, and utilizing behavior data for detecting students with depression risks. Unipolar depression makes a large contribution to the burden of disease, ranking first in middle- and high-income countries. We survey descriptors of depression and then design a data collection platform in a classroom, based on the assumption that such descriptors are also effective for students with depression risks. Visual, acoustic, and e-learning data are chosen for collection, and various issues including devices, preprocessing, and consent agreements are investigated. We also show two kinds of utilization scenarios for the collected data and introduce several techniques and methods we developed for feature extraction and early detection.

Collaboration


Dive into Tetsu Matsukawa's collaborations.

Top Co-Authors

Takahiro Okabe

Kyushu Institute of Technology

Kenji Nishida

National Institute of Advanced Industrial Science and Technology
