Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yoshihiko Hamamoto is active.

Publication


Featured research published by Yoshihiko Hamamoto.


Pattern Recognition | 1998

A gabor filter-based method for recognizing handwritten numerals

Yoshihiko Hamamoto; Shunji Uchimura; Masanori Watanabe; Tetsuya Yasuda; Yoshihiro Mitani; Shingo Tomita

We study a Gabor-filter-based method for handwritten numeral recognition. The Gabor filter is based on a multi-channel filtering theory for the processing of visual information in the early stages of the human visual system. The performance of the Gabor-filter-based method is demonstrated on the ETL-1 database. Experimental results show that the artificial neural network classifier achieved an error rate of 2.34% on a test set of 7000 characters. Therefore, the Gabor-filter-based method should be considered for the recognition of handwritten numeric characters.
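The multi-channel idea above can be sketched as a small bank of oriented Gabor filters whose mean absolute responses form a feature vector. This is a minimal NumPy illustration, not the authors' implementation; the kernel size, sigma, wavelength, and the toy "digit" are all assumptions.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine
    carrier oriented at angle theta with wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

def gabor_features(image, n_orientations=4, size=7, sigma=2.0, lam=4.0):
    """Filter the image with a multi-orientation Gabor bank and return
    the mean absolute response of each channel as a feature vector."""
    feats = []
    h, w = image.shape
    for k in range(n_orientations):
        kern = gabor_kernel(size, sigma, theta=k * np.pi / n_orientations, lam=lam)
        s = kern.shape[0]
        # 'valid' cross-correlation via explicit sliding windows (no SciPy needed)
        resp = np.array([[np.sum(image[i:i + s, j:j + s] * kern)
                          for j in range(w - s + 1)]
                         for i in range(h - s + 1)])
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# toy 16x16 "digit": a vertical stroke; the 0-degree channel responds most
img = np.zeros((16, 16))
img[2:14, 7:9] = 1.0
fv = gabor_features(img)
```

A classifier (e.g. the ANN used in the paper) would then be trained on such feature vectors rather than on raw pixels.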


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

A bootstrap technique for nearest neighbor classifier design

Yoshihiko Hamamoto; Shunji Uchimura; Shingo Tomita

A bootstrap technique for nearest neighbor classifier design is proposed. Our primary interest in designing a classifier is in small training sample size situations. Conventional bootstrap techniques sample the training set with replacement. Our technique, in contrast, generates bootstrap samples by locally combining original training samples. The nearest neighbor classifier is designed on the bootstrap samples and tested on samples independent of the training set. The performance of the proposed classifier is demonstrated on three artificial data sets and one real data set. Experimental results show that the nearest neighbor classifier designed on the bootstrap samples outperforms conventional k-NN classifiers as well as edited 1-NN classifiers, particularly in high dimensions.
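One way to read "locally combining original training samples" is that each bootstrap sample is a random convex combination of a training point and its r nearest neighbors from the same class. The sketch below follows that reading; the exact generation rule, r, and the toy Gaussian data are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_bootstrap(X, r=3, n_new=None):
    """Generate bootstrap samples by locally combining points of X:
    each new sample is a random convex combination of a randomly chosen
    point and its r nearest neighbors (including itself)."""
    n = len(X)
    n_new = n_new or n
    out = []
    for _ in range(n_new):
        i = rng.integers(n)
        d = np.linalg.norm(X - X[i], axis=1)
        nbr = np.argsort(d)[:r + 1]          # the point itself plus r neighbors
        w = rng.random(r + 1)
        w /= w.sum()                          # random convex weights
        out.append(w @ X[nbr])
    return np.array(out)

def nn_predict(Xtr, ytr, Xte):
    """Plain 1-NN classifier."""
    return np.array([ytr[np.argmin(np.linalg.norm(Xtr - x, axis=1))]
                     for x in Xte])

# two well-separated Gaussian classes; bootstrap each class separately
X0 = rng.normal(0.0, 1.0, (20, 5))
X1 = rng.normal(2.0, 1.0, (20, 5))
Xb = np.vstack([local_bootstrap(X0), local_bootstrap(X1)])
yb = np.array([0] * 20 + [1] * 20)
Xte = np.vstack([rng.normal(0.0, 1.0, (10, 5)), rng.normal(2.0, 1.0, (10, 5))])
yte = np.array([0] * 10 + [1] * 10)
acc = (nn_predict(Xb, yb, Xte) == yte).mean()
```

The local combination smooths the reference set, which is the intuition behind the reported gains over conventional k-NN in small-sample, high-dimensional settings.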


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1996

On the behavior of artificial neural network classifiers in high-dimensional spaces

Yoshihiko Hamamoto; Shunji Uchimura; Shingo Tomita

It is widely believed in the pattern recognition field that when a fixed number of training samples is used to design a classifier, the generalization error of the classifier tends to increase as the number of features gets larger. In this paper, we discuss the generalization error of the artificial neural network (ANN) classifiers in high-dimensional spaces, under a practical condition that the ratio of the training sample size to the dimensionality is small. Experimental results show that the generalization error of ANN classifiers seems much less sensitive to the feature size than 1-NN, Parzen and quadratic classifiers.


Pattern Recognition | 1996

On the estimation of a covariance matrix in designing Parzen classifiers

Yoshihiko Hamamoto; Yasushi Fujimoto; Shingo Tomita

The design of Parzen classifiers requires careful attention to the kernel covariance matrix as well as to the window width. Although a considerable amount of effort has been devoted to selecting the window width, the problem of estimating kernel covariance matrices has received little attention in the past. In this paper we discuss kernel covariance estimators for the design of Parzen classifiers. We compare the performance of Parzen classifiers based on several kernel covariance estimators, such as the Toeplitz, Ness and orthogonal expansion estimators, on three artificial data sets. From the experimental results, we recommend the use of the Toeplitz estimator, particularly in high-dimensional spaces.
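One common way to impose Toeplitz structure is to average the sample covariance along its diagonals, so that entry (i, j) depends only on |i - j|, and then use that matrix as the Gaussian kernel covariance. The sketch below follows that reading; the paper's exact estimator may differ, and the window width h and toy data are assumptions.

```python
import numpy as np

def toeplitz_cov(X):
    """Toeplitz-structured covariance estimate: average the sample
    covariance along its diagonals so Sigma[i, j] depends only on |i - j|."""
    S = np.cov(X, rowvar=False)
    d = S.shape[0]
    c = np.array([np.mean(np.diagonal(S, k)) for k in range(d)])
    i, j = np.indices((d, d))
    return c[np.abs(i - j)]

def parzen_log_density(x, X, Sigma, h=1.0):
    """Parzen estimate at x with a Gaussian kernel of covariance h^2 * Sigma."""
    d = X.shape[1]
    C = h**2 * Sigma
    Cinv = np.linalg.inv(C)
    _, logdet = np.linalg.slogdet(C)
    diffs = X - x
    expo = -0.5 * np.einsum('ij,jk,ik->i', diffs, Cinv, diffs)
    lognorm = -0.5 * (d * np.log(2 * np.pi) + logdet)
    return np.log(np.mean(np.exp(expo))) + lognorm

def parzen_classify(x, classes, h=1.0):
    """Assign x to the class with the highest Parzen density estimate."""
    scores = [parzen_log_density(x, X, toeplitz_cov(X), h) for X in classes]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (30, 4))
X1 = rng.normal(3.0, 1.0, (30, 4))
pred0 = parzen_classify(np.zeros(4), [X0, X1])
pred1 = parzen_classify(np.full(4, 3.0), [X0, X1])
```

The appeal in high dimensions is that the Toeplitz estimate has only d free parameters instead of d(d+1)/2, which stabilizes the kernel when samples are scarce.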


Pattern Recognition | 1993

On a theoretical comparison between the orthonormal discriminant vector method and discriminant analysis

Yoshihiko Hamamoto; Taiho Kanaoka; Shingo Tomita

The performance of the orthonormal discriminant vector (ODV) method is discussed in comparison with discriminant analysis. The ODV method produces features that maximize the Fisher criterion subject to the orthonormality of the features. In contrast with discriminant analysis, the ODV method places no limit on the maximum number of features that can be extracted. From a theoretical viewpoint, it is proved that the ODV method is more powerful than discriminant analysis in terms of the Fisher criterion. This theoretical conclusion is verified experimentally on two real data sets.
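The constrained maximization can be sketched greedily in the Foley-Sammon style: each new vector maximizes the Fisher criterion w'Sb w / w'Sw w over the subspace orthogonal to the vectors already chosen. A minimal NumPy sketch, not the paper's derivation; the toy data are assumptions.

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

def odv(X, y, n_vectors):
    """Greedy orthonormal discriminant vectors: each vector maximizes the
    Fisher criterion restricted to the complement of the previous ones."""
    Sw, Sb = scatter_matrices(X, y)
    d = X.shape[1]
    V = []
    for _ in range(n_vectors):
        if V:
            Q = np.array(V).T
            # orthonormal basis of the complement of span(V)
            B = np.linalg.svd(np.eye(d) - Q @ Q.T)[0][:, :d - len(V)]
        else:
            B = np.eye(d)
        # generalized eigenproblem of the restricted Fisher criterion
        M = np.linalg.solve(B.T @ Sw @ B, B.T @ Sb @ B)
        vals, vecs = np.linalg.eig(M)
        u = np.real(vecs[:, np.argmax(np.real(vals))])
        w = B @ u
        V.append(w / np.linalg.norm(w))
    return np.array(V)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (30, 3)),
               rng.normal([2.0, 1.0, 0.0], 1.0, (30, 3))])
y = np.array([0] * 30 + [1] * 30)
V = odv(X, y, 2)
```

Unlike classical discriminant analysis, which yields at most (number of classes - 1) features, this loop can continue up to the dimensionality d, which is the property the abstract highlights.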


International Symposium on Neural Networks | 1993

Evaluation of artificial neural network classifiers in small sample size situations

Yoshihiko Hamamoto; Shunji Uchimura; Taiho Kanaoka; Shingo Tomita

Small training sample size problems in artificial neural network classifier design are discussed. A comparison of artificial neural network (ANN) and nonparametric statistical classifiers in small sample size situations is also presented in terms of the error probability.


International Symposium on Neural Networks | 1995

Effects of the sample size in artificial neural network classifier design

Shunji Uchimura; Yoshihiko Hamamoto; Shingo Tomita

The effects of the sample size on the estimated error rate of artificial neural network (ANN) classifiers are discussed. Experimental results show that the standard deviation of the estimated error rate of ANN classifiers is independent of the number of hidden units. In addition, it is shown that, even when the class distributions are Gaussian, ANN classifiers outperform the quadratic discriminant function when the class sample sizes are highly unequal.


International Symposium on Neural Networks | 1995

Use of bootstrap samples in designing artificial neural network classifiers

Yoshihiro Mitani; Yoshihiko Hamamoto; Shingo Tomita

We propose a new bootstrap method for designing artificial neural network (ANN) classifiers. The classification performance of ANN classifiers based on the new bootstrap method is demonstrated in small training sample size situations on artificial data sets.


Archive | 1998

Comparison of Pruning Algorithms in Neural Networks

Yoshihiko Hamamoto; Toshinori Hase; Satoshi Nakai; Shingo Tomita

In order to select the right-sized network, many pruning algorithms have been proposed. One may ask which of the pruning algorithms is best in terms of the generalization error of the resulting artificial neural network classifiers. In this paper, we compare the performance of four pruning algorithms in small training sample size situations. A comparative study with artificial and real data suggests that the weight-elimination method proposed by Weigend et al. is best.
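The weight-elimination penalty of Weigend et al. adds a term of the form lambda * sum_i w_i^2 / (w0^2 + w_i^2) to the training loss: weights small relative to the scale w0 cost roughly (w/w0)^2 and are driven to zero, while large weights saturate toward a constant cost and survive. The sketch below applies it to a toy linear model; the model, lambda, and w0 are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def weight_elimination_penalty(w, w0=1.0):
    """Weigend et al.'s penalty: sum of w^2 / (w0^2 + w^2).
    Approximately (w/w0)^2 for small weights, approximately 1 for large ones."""
    r = (w / w0) ** 2
    return np.sum(r / (1.0 + r))

def penalty_grad(w, w0=1.0):
    """Elementwise derivative: d/dw [w^2/(w0^2 + w^2)] = 2 w w0^2 / (w0^2 + w^2)^2."""
    return 2.0 * w * w0**2 / (w0**2 + w**2) ** 2

# toy regression where only the first feature matters: the penalty,
# added to the MSE gradient, prunes the irrelevant weight w[1]
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)
w, lam = np.array([0.5, 0.5]), 0.1
for _ in range(2000):
    grad_fit = 2.0 * X.T @ (X @ w - y) / len(X)   # MSE gradient
    w -= 0.05 * (grad_fit + lam * penalty_grad(w))
```

After training, weights that stay near zero can be deleted outright, which is the pruning step the comparison in the paper evaluates.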


International Symposium on Neural Networks | 1995

On the effect of the nonlinearity of the sigmoid function in artificial neural network classifiers

Shunji Uchimura; Yoshihiko Hamamoto; Shingo Tomita

Despite a considerable amount of recent work directed towards pattern recognition applications of artificial neural networks (ANNs), little quantitative information concerning the nonlinearity of the sigmoid function is available to a designer. In this paper, the authors study the effect of the nonlinearity of the sigmoid function on the generalization capability of ANN classifiers trained with the backpropagation algorithm. Experimental results show that, as with the network size, there exists an optimal degree of nonlinearity for a particular problem.
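The "degree of nonlinearity" can be made concrete as a gain parameter beta in the sigmoid sigma(beta * x): a small beta makes the unit behave almost linearly near the origin, while a large beta approaches a hard threshold. A minimal sketch; the beta values are illustrative assumptions.

```python
import numpy as np

def sigmoid(x, beta=1.0):
    """Logistic sigmoid with gain beta controlling its nonlinearity:
    small beta is nearly linear around 0, large beta is nearly a step."""
    return 1.0 / (1.0 + np.exp(-beta * x))

x = np.linspace(-1.0, 1.0, 5)
nearly_linear = sigmoid(x, beta=0.1)   # close to 0.5 + (beta/4) * x
nearly_step = sigmoid(x, beta=50.0)    # close to a 0/1 threshold
```

Tuning beta (or, equivalently, the scale of the incoming weights) therefore trades off a smooth, low-capacity response against a sharp, high-capacity one, which is the trade-off the experiments probe.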

Collaboration


Dive into Yoshihiko Hamamoto's collaborations.
