
Publication


Featured research published by Hiroshi Tenmoto.


Lecture Notes in Computer Science | 1998

MDL-Based Selection of the Number of Components in Mixture Models for Pattern Classification

Hiroshi Tenmoto; Mineichi Kudo; Masaru Shimbo

A new method is proposed for selecting the optimal number of components of a mixture model for pattern classification. We approximate each class-conditional density by a mixture of Gaussian components, estimate the parameters of the mixture by the EM (Expectation-Maximization) algorithm, and select the optimal number of components on the basis of the MDL (Minimum Description Length) principle. The goodness of an estimated model is evaluated as a trade-off between the number of misclassified training samples and the complexity of the model.
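A minimal 1-D sketch of this selection scheme: fit mixtures of increasing size by EM and keep the candidate with the smallest two-part code length. The quantile initialization, variance floor, and parameter-counting convention are choices of this sketch, not details taken from the paper.

```python
import math
import random

def em_1d(xs, k, iters=50, var_floor=0.05):
    """Fit a k-component 1-D Gaussian mixture by EM; return the log-likelihood."""
    xs = sorted(xs)
    n = len(xs)
    # Means start at evenly spaced quantiles; shared initial variance; uniform weights.
    mus = [xs[(2 * i + 1) * n // (2 * k)] for i in range(k)]
    mean = sum(xs) / n
    vars_ = [max(var_floor, sum((x - mean) ** 2 for x in xs) / n)] * k
    ws = [1.0 / k] * k
    ll = 0.0
    for _ in range(iters):
        resp, ll = [], 0.0
        for x in xs:  # E-step: responsibilities and log-likelihood
            ps = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                  for w, m, v in zip(ws, mus, vars_)]
            s = sum(ps)
            ll += math.log(s)
            resp.append([p / s for p in ps])
        for j in range(k):  # M-step: re-estimate weights, means, variances
            nj = max(sum(r[j] for r in resp), 1e-9)
            ws[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            vars_[j] = max(var_floor,
                           sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj)
    return ll

def mdl(ll, k, n):
    """Two-part code length: (k-1) weights + k means + k variances, each costing (log n)/2."""
    return -ll + 0.5 * (3 * k - 1) * math.log(n)

random.seed(0)
data = ([random.gauss(0.0, 0.7) for _ in range(100)]
        + [random.gauss(10.0, 0.7) for _ in range(100)])
scores = {k: mdl(em_1d(data, k), k, len(data)) for k in (1, 2, 3)}
best_k = min(scores, key=scores.get)
```

On this two-cluster toy data the penalty term rules out the one-component fit and the spurious third component, so the criterion settles on two components.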


Pattern Recognition | 1998

Piecewise Linear Classifiers with an Appropriate Number of Hyperplanes

Hiroshi Tenmoto; Mineichi Kudo; Masaru Shimbo

A new method for constructing a piecewise linear classifier is proposed. The method selects an appropriate number of hyperplanes for the classifier by the MDL (Minimum Description Length) criterion: hyperplanes are constructed so as to keep the local error rate on a training set under a threshold, and the threshold is determined automatically by the MDL criterion so as to avoid overfitting the classifier to the training set. In experiments, this method gave better results than a previous method.
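The trade-off can be sketched as a two-part code length. The bit costs below (identifying each misclassified sample with about log2 n bits, encoding each hyperplane's coefficients at fixed precision) and the error profile are illustrative assumptions, not the paper's actual coding scheme.

```python
import math

def description_length(n_errors, n_hyperplanes, n_samples, n_features, bits_per_param=16):
    """Two-part code length (bits): index each misclassified sample (~log2 n bits each)
    plus encode each hyperplane's n_features + 1 coefficients at fixed precision."""
    error_bits = n_errors * math.log2(n_samples)
    model_bits = n_hyperplanes * (n_features + 1) * bits_per_param
    return error_bits + model_bits

# Hypothetical training-error profile: more hyperplanes -> fewer errors, bigger model.
errors_by_h = {1: 60, 2: 25, 3: 8, 4: 6, 5: 5}
n, d = 500, 2
best_h = min(errors_by_h, key=lambda h: description_length(errors_by_h[h], h, n, d))
```

Adding a fourth or fifth hyperplane saves only one or two errors, which no longer pays for the extra model bits, so the minimum falls at three hyperplanes.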


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 1999

Determination of the number of components based on class separability in mixture-based classifiers

Hiroshi Tenmoto; Mineichi Kudo; Masaru Shimbo

We propose a novel method for determining the number of components in mixture-based classifiers. Each class-conditional probability density function can be approximated well by a mixture of Gaussian components; however, the performance of such a classifier depends on the number of components. In the proposed method, the number of components is determined on the basis of both probabilistic likelihood and class separability. Experimental results confirmed the effectiveness of the method.
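The abstract does not name the separability measure. One standard choice for Gaussian components, shown here in its 1-D closed form as an illustration only, is the Bhattacharyya distance:

```python
import math

def bhattacharyya_1d(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussians (larger = more separable)."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))

# Components whose means are far apart are more separable than nearby ones.
close = bhattacharyya_1d(0.0, 1.0, 0.5, 1.0)
far = bhattacharyya_1d(0.0, 1.0, 4.0, 1.0)
```

A criterion of this kind can complement likelihood: likelihood rewards a good density fit, while a separability term rewards component configurations that keep the classes apart.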


Lecture Notes in Computer Science | 2000

Selection of the Number of Components Using a Genetic Algorithm for Mixture Model Classifiers

Hiroshi Tenmoto; Mineichi Kudo; Masaru Shimbo

A genetic algorithm is employed to select the appropriate number of components for mixture model classifiers. In this classifier, each class-conditional probability density function can be approximated well using a mixture of Gaussian distributions, so the classification performance inherently depends on the number of components. In the proposed method, the appropriate number of components is selected on the basis of class separability, whereas a conventional method is based on likelihood. The combination of mixture models is evaluated by a classification-oriented MDL (minimum description length) criterion, and its optimization is carried out by a genetic algorithm. The effectiveness of the method is shown through experiments on artificial and real datasets.
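The search component can be sketched as a small genetic algorithm over per-class component counts. The fitness here is a stand-in quadratic with a known optimum at (2, 3, 1); in the actual method it would be the classification-oriented MDL score of the corresponding mixture combination. All GA settings below are assumptions of this sketch.

```python
import random

def genetic_search(fitness, n_classes, k_max, pop_size=20, generations=60, seed=0):
    """Minimize `fitness` over tuples of per-class component counts with a tiny GA:
    elitist truncation selection, one-point crossover, point mutation."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(1, k_max) for _ in range(n_classes)) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]  # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_classes) if n_classes > 1 else 0
            child = list(a[:cut] + b[cut:])          # one-point crossover
            if rng.random() < 0.3:                   # point mutation
                child[rng.randrange(n_classes)] = rng.randint(1, k_max)
            children.append(tuple(child))
        pop = elite + children
    return min(pop, key=fitness)

# Stand-in criterion with a known optimum; 3 classes, up to 6 components each.
target = (2, 3, 1)
best = genetic_search(lambda c: sum((x - t) ** 2 for x, t in zip(c, target)),
                      n_classes=3, k_max=6)
```

Elitism guarantees the best combination found is never lost, which matters when each fitness evaluation (fitting a mixture and scoring it) is expensive.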


International Conference on Pattern Recognition | 2016

Simultaneous visualization of samples, features and multi-labels

Mineichi Kudo; Keigo Kimura; Michal Haindl; Hiroshi Tenmoto

Visualization helps us to understand single-label and multi-label classification problems. In this paper, we show several standard techniques for simultaneous visualization of samples, features and multiple labels on the basis of linear regression and matrix factorization. Experiments on two real-life multi-label datasets showed that such techniques are effective for revealing how labels are correlated with each other and how features are related to labels in a given multi-label classification problem.
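As one minimal factorization primitive of the kind such visualizations build on, power iteration extracts the dominant axis of a symmetric label co-occurrence matrix, giving each label a coordinate on that axis. The toy matrix below is invented for illustration and is not from the paper's datasets.

```python
def power_iteration(mat, iters=200):
    """Leading eigenvector of a symmetric matrix via power iteration; its entries
    place each item on the dominant axis of a 1-D embedding."""
    n = len(mat)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy label co-occurrence matrix: labels 0 and 1 co-occur often; label 2 is isolated.
cooc = [[4.0, 3.0, 0.0],
        [3.0, 4.0, 0.0],
        [0.0, 0.0, 1.0]]
coords = power_iteration(cooc)
```

The two correlated labels land at the same coordinate while the isolated label collapses to zero, which is exactly the "which labels go together" signal a 2-D plot would show with two such axes.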


WSTST | 2005

Density- and Complexity-Regularization in Gaussian Mixture Bayesian Classifier

Hiroshi Tenmoto; Mineichi Kudo

We regularize the Gaussian mixture Bayesian (GMB) classifier in two respects: 1) the class-conditional probability density functions, and 2) the complexity of the classifier. For the former, we employ the Bayesian regularization method of Ormoneit and Tresp, which is derived from the maximum a posteriori (MAP) estimation framework. For the latter, we use a discriminative MDL-based model selection method that we previously proposed. In this paper, we optimize the hyperparameters in 1) and 2) simultaneously with respect to the discriminative MDL criterion, aiming to auto-configure the hyperparameters for the best classification performance. We show the effectiveness of the proposed method through experiments on real datasets.
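Ormoneit and Tresp's exact update is not reproduced here; the following is only a generic illustration of the MAP idea it rests on, where a hyperparameter acting as a pseudo-count shrinks a maximum-likelihood variance toward a prior value, preventing components from collapsing onto few samples.

```python
def map_variance(samples, prior_var, alpha):
    """Shrink the ML variance toward prior_var; alpha acts as a pseudo-count.
    (Illustrative MAP-style shrinkage, not the paper's exact update.)"""
    n = len(samples)
    mu = sum(samples) / n
    ml_var = sum((x - mu) ** 2 for x in samples) / n
    return (n * ml_var + alpha * prior_var) / (n + alpha)
```

With alpha = 0 the estimate reduces to the ML variance; for a degenerate cluster of identical samples the prior keeps the variance strictly positive. Tuning alpha jointly with the model-complexity choice is the kind of hyperparameter optimization the abstract describes.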


Lecture Notes in Computer Science | 2004

Classifier-Independent Visualization of Supervised Data Structures Using a Graph

Hiroshi Tenmoto; Yasukuni Mori; Mineichi Kudo

Supervised data structures in high-dimensional feature spaces are displayed as graphs. The structure is analyzed by normal mixture distributions: the nodes of the graph correspond to the mean vectors of the mixture components, and their locations are determined by Sammon's nonlinear mapping. The thickness of an edge expresses the separability between the corresponding pair of component distributions, which is measured by the Kullback-Leibler divergence. Experimental results confirmed that the proposed method can illustrate in which regions, and to what extent, it is difficult to classify samples correctly. Such visual information can be utilized to improve the feature set.
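The edge-weight ingredient has a closed form for Gaussians. The paper works with multivariate components; the 1-D instance below, symmetrized since KL is asymmetric, is the simplest version of that computation.

```python
import math

def kl_gauss_1d(mu1, var1, mu2, var2):
    """KL(p || q) for two 1-D Gaussians, in closed form."""
    return (math.log(math.sqrt(var2 / var1))
            + (var1 + (mu1 - mu2) ** 2) / (2.0 * var2) - 0.5)

def symmetric_kl(mu1, var1, mu2, var2):
    """Symmetrized KL divergence, usable as a separability-based edge weight."""
    return kl_gauss_1d(mu1, var1, mu2, var2) + kl_gauss_1d(mu2, var2, mu1, var1)
```

Identical components get divergence zero, and the value grows with the gap between means relative to the variances, so mapping it to edge thickness directly encodes which component pairs are hard to tell apart.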


International Conference on Pattern Recognition | 1998

A subclass-based mixture model for pattern recognition

Mineichi Kudo; Hiroshi Tenmoto; Satoru Sumiyoshi; Masaru Shimbo

A classifier based on a mixture model is proposed. The expectation-maximization (EM) algorithm for constructing a mixture density is sensitive to the initial densities, and it is also difficult to determine the optimal number of component densities. In this study, we construct a mixture density on the basis of the hyper-rectangles found by the subclass method, whereby the number of components is determined automatically. Experimental results show the effectiveness of this approach.
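The abstract does not spell out how a rectangle becomes a density. A natural mapping, assumed here purely for illustration, places one Gaussian component per axis-aligned hyper-rectangle: mean at the center, per-dimension variance from the extent (the variance of a uniform distribution over the box).

```python
def rect_to_gaussian(lo, hi):
    """One diagonal Gaussian per hyper-rectangle [lo, hi]: mean at the center,
    per-dimension variance (b - a)^2 / 12, i.e. Var of U(a, b), with a small floor."""
    mean = [(a + b) / 2.0 for a, b in zip(lo, hi)]
    var = [max((b - a) ** 2 / 12.0, 1e-6) for a, b in zip(lo, hi)]
    return mean, var

mean, var = rect_to_gaussian([0.0, 0.0], [2.0, 4.0])
```

Since the subclass method yields as many rectangles as it needs to cover each class, the number of components falls out automatically, which is the property the abstract highlights.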


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 1998

Appropriate initial component densities of mixture modeling for pattern recognition

Mineichi Kudo; F. Taniguchi; Hiroshi Tenmoto; Masaru Shimbo

Several choices of initial component densities are compared in mixture modeling for pattern recognition. The EM algorithm is widely adopted for constructing a mixture density that approximates a class-conditional density; however, the algorithm is very sensitive to the number of component densities and to the initial component densities themselves. The initial component densities are obtained by a clustering method. We report a comparison between clustering methods that yield non-overlapping clusters and methods that yield overlapping clusters.
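A minimal sketch of the non-overlapping case: derive initial weights, means, and variances from 1-D k-means clusters, which EM would then refine. The center initialization and variance floor are choices of this sketch, and 1-D is assumed for brevity.

```python
def kmeans_init_components(xs, k, iters=20):
    """Initial mixture components from non-overlapping 1-D k-means clusters:
    each cluster contributes a (weight, mean, variance) triple for EM to refine."""
    centers = sorted(xs)[:: max(1, len(xs) // k)][:k]  # spread initial centers
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in xs:  # assign each point to its nearest center
            j = min(range(len(centers)), key=lambda i: (x - centers[i]) ** 2)
            clusters[j].append(x)
        centers = [sum(c) / len(c) if c else m for c, m in zip(clusters, centers)]
    comps = []
    for c in clusters:
        if c:
            m = sum(c) / len(c)
            v = max(sum((x - m) ** 2 for x in c) / len(c), 1e-6)
            comps.append((len(c) / len(xs), m, v))
    return comps

comps = kmeans_init_components([0.0, 0.1, 0.2, 10.0, 10.1, 10.2], k=2)
```

Overlapping clusterings would instead assign fractional memberships, giving softer initial responsibilities; comparing the two regimes is exactly the study the abstract describes.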


SSPR & SPR '08 Proceedings of the 2008 Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition | 2008

Soft Feature Selection by Using a Histogram-Based Classifier

Hiroshi Tenmoto; Mineichi Kudo

Collaboration


Dive into Hiroshi Tenmoto's collaborations.

Top Co-Authors


Michal Haindl

Academy of Sciences of the Czech Republic
