Publication


Featured research published by Haruhisa Takahashi.


IEEE Transactions on Neural Networks | 1996

Towards more practical average bounds on supervised learning

Hanzhong Gu; Haruhisa Takahashi

In this paper, we describe a method which enables us to study the average generalization performance of learning directly via hypothesis-testing inequalities. The resulting theory provides a unified viewpoint on average-case learning curves for concept learning and regression in realistic learning problems, not necessarily within the Bayesian framework. The advantages of the theory are that it alleviates the practical pessimism frequently claimed for the results of the Vapnik-Chervonenkis (VC) theory and the like, and that it provides general insights into generalization. Moreover, the bounds on learning curves are directly related to the number of adjustable system weights. Although the theory is based on an approximation assumption and cannot apply to the worst-case learning setting, the precondition of the assumption is mild, and the approximation itself is only a sufficient condition for the validity of the theory. We illustrate the results with numerical simulations, and apply the theory to examining the generalization ability of combinations of neural networks.
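
The numerical simulations mentioned above concern average (not worst-case) learning curves. As a rough, self-contained illustration of what such a simulation looks like (a generic sketch, not the paper's method: the random halfspace target, the perceptron learner, and all sizes here are assumptions), one can average the test error of a learner over many random training sets of each size:

    import numpy as np

    rng = np.random.default_rng(0)

    def perceptron(X, y, epochs=50):
        # classic perceptron updates; stands in for an arbitrary learner
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (xi @ w) <= 0:
                    w += yi * xi
        return w

    d = 20                       # input dimension = number of adjustable weights
    w_star = rng.normal(size=d)  # target concept: a random halfspace
    X_test = rng.normal(size=(2000, d))
    y_test = np.sign(X_test @ w_star)

    for m in [20, 40, 80, 160, 320]:
        errs = []
        for _ in range(30):      # average over random training sets of size m
            X = rng.normal(size=(m, d))
            y = np.sign(X @ w_star)
            w = perceptron(X, y)
            errs.append(np.mean(np.sign(X_test @ w) != y_test))
        print(f"m={m:4d}  average generalization error ~ {np.mean(errs):.3f}")

The printed curve decays with the training set size m, at a rate governed by the number of adjustable weights d, which is the kind of average-case behavior the theory bounds.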


Neural Computation | 2000

Exponential or Polynomial Learning Curves? Case-Based Studies

Hanzhong Gu; Haruhisa Takahashi

Learning curves exhibit a diversity of behaviors, such as phase transitions. However, the understanding of learning curves is still extremely limited, and existing theories can give the impression that, without empirical studies (e.g., cross-validation), one can probably do nothing more than offer qualitative interpretations. In this note, we propose a theory of learning curves based on the idea of reducing learning problems to hypothesis-testing ones. The theory provides a simple approach that is potentially useful for predicting and interpreting a diversity of learning curve behaviors, both qualitatively and quantitatively, and it applies to finite training sample sizes and finite learning machines, and to learning situations not necessarily within the Bayesian framework. We illustrate the results by examining some exponential learning curve behaviors observed in Cohn and Tesauro's (1992) experiment.
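
To make the exponential-versus-polynomial question concrete, one can fit both functional forms to a measured learning curve and compare residuals. The sketch below only illustrates that comparison; the error values are invented for the example and do not come from Cohn and Tesauro (1992) or from this paper.

    import numpy as np
    from scipy.optimize import curve_fit

    # empirical learning curve: average error at each training-set size m
    m = np.array([10, 20, 40, 80, 160, 320], dtype=float)
    err = np.array([0.30, 0.18, 0.10, 0.055, 0.030, 0.016])  # illustrative numbers

    def exponential(m, a, b):
        return a * np.exp(-b * m)

    def polynomial(m, a, b):
        return a * m ** (-b)

    for name, f in [("exponential", exponential), ("polynomial", polynomial)]:
        p, _ = curve_fit(f, m, err, p0=(1.0, 0.1), maxfev=10000)
        resid = np.sum((f(m, *p) - err) ** 2)
        print(f"{name:12s} params={p}  sum-of-squares={resid:.2e}")

The better-fitting form (here the polynomial one, since the fake data halves when m doubles) is a crude empirical proxy for the distinction the theory aims to predict analytically.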


IEEE Transactions on Neural Networks | 1998

A tight bound on concept learning

Haruhisa Takahashi; Hanzhong Gu

A tight bound on the generalization performance of concept learning is shown by a novel approach. Unlike existing theories, the new approach makes no large-sample assumption as in the Bayesian approach, and does not consider uniform learnability as in the VC dimension analysis. We analyze the generalization performance of a particular learning algorithm that is not necessarily well behaved, in the hope that once the learning curves or sample complexity of this algorithm are obtained, they are applicable to real learning situations. The result is expressed in terms of a dimension called the Boolean interpolation dimension, and is tight in the sense that it meets the lower bound requirement of Baum and Haussler. The Boolean interpolation dimension is not greater than the number of modifiable system parameters, and is definable for almost all real-world networks, such as backpropagation networks and linear threshold multilayer networks. It is shown that the generalization error follows a beta distribution with parameters m, the number of training examples, and d, the Boolean interpolation dimension. This implies that for large d, the learning results tend to the average-case result, known as the self-averaging property of learning. The bound is shown to be applicable to practical learning algorithms that can be modeled by the Gibbs algorithm with a uniform prior. The result is also extended to the case of inconsistent learning.
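
The abstract does not give the exact beta parametrization. Under one common reading (an assumption here, not stated in the paper) in which the error of a consistent learner is distributed as Beta(d, m - d + 1), the mean is d/(m + 1), and the relative spread shrinks as d grows at a fixed m/d ratio, which is one way to see the self-averaging property:

    import numpy as np
    from scipy.stats import beta

    m_over_d = 10  # fixed ratio of training examples to dimension
    for d in [5, 20, 80, 320]:
        m = m_over_d * d
        dist = beta(d, m - d + 1)  # assumed parametrization; mean is d/(m+1)
        mean, std = dist.mean(), dist.std()
        print(f"d={d:4d} m={m:5d}  mean={mean:.4f}  std={std:.4f}  std/mean={std/mean:.3f}")

The mean stays near d/m while std/mean falls roughly like 1/sqrt(d), so for large d an individual learning run concentrates around the average-case result.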


Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 2002

Learning Capability: Classical RBF Network vs. SVM with Gaussian Kernel

Rameswar Debnath; Haruhisa Takahashi

The Support Vector Machine (SVM) has recently been introduced as a new learning technique, grounded in statistical learning theory, for solving a variety of real-world applications. The classical Radial Basis Function (RBF) network has a structure similar to that of an SVM with a Gaussian kernel. In this paper we compare the generalization performance of the RBF network and the SVM on classification problems. We applied the Lagrangian differential gradient method for training and pruning the RBF network. The RBF network shows better generalization performance, and is computationally faster, than the SVM with a Gaussian kernel, especially for large training data sets.
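
A minimal version of this comparison can be run with off-the-shelf tools. In the sketch below, scikit-learn's SVC stands in for the SVM with Gaussian kernel, and a k-means-centres-plus-linear-readout model stands in for the classical RBF network; the paper's Lagrangian differential gradient training is not reproduced, and the dataset and hyperparameters are assumptions.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.linear_model import RidgeClassifier
    from sklearn.cluster import KMeans

    X, y = make_moons(n_samples=600, noise=0.25, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # SVM with Gaussian (RBF) kernel
    svm = SVC(kernel="rbf", gamma=2.0).fit(Xtr, ytr)

    # classical RBF network: k Gaussian units at k-means centres, linear output layer
    k, gamma = 20, 2.0
    centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xtr).cluster_centers_

    def rbf_features(X):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    rbf_net = RidgeClassifier().fit(rbf_features(Xtr), ytr)

    print("SVM accuracy:        ", svm.score(Xte, yte))
    print("RBF network accuracy:", rbf_net.score(rbf_features(Xte), yte))

On toy data of this size the two models typically score similarly; the paper's speed and generalization claims concern much larger training sets and its specific training method.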


Systems and Computers in Japan | 2003

Speech recognition using recurrent neural prediction model

Toru Uchiyama; Haruhisa Takahashi

The neural prediction model (NPM) proposed by Iso and Watanabe is a successful example of a speech recognition neural network with a high recognition rate. This model uses multilayer perceptrons for pattern prediction (not for pattern recognition), and achieves a recognition rate as high as 99.8% for speaker-independent isolated words. This paper proposes a recurrent neural prediction model (RNPM) and a recurrent network architecture for this model. The proposed model significantly reduces the size of the network while matching the high recognition rate of the original model and learning efficiently, for speaker-independent isolated words.
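
The core idea of prediction-based recognition is that each word class has its own predictor, and an utterance is assigned to the class whose predictor reconstructs it with the lowest accumulated error. The sketch below illustrates only that decision rule on toy data, with least-squares linear predictors standing in for the NPM/RNPM networks; everything beyond the decision rule is an assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_predictor(sequences):
        # least-squares linear predictor: frame[t] ~ frame[t-1] @ A
        X = np.vstack([s[:-1] for s in sequences])
        Y = np.vstack([s[1:] for s in sequences])
        A, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return A

    def prediction_error(A, seq):
        return np.sum((seq[:-1] @ A - seq[1:]) ** 2)

    # toy "words": each class has its own frame-to-frame dynamics
    dim, T = 8, 30
    dynamics = [rng.normal(scale=0.3, size=(dim, dim)) for _ in range(3)]

    def sample(c):
        s = [rng.normal(size=dim)]
        for _ in range(T - 1):
            s.append(s[-1] @ dynamics[c] + 0.1 * rng.normal(size=dim))
        return np.array(s)

    models = [fit_predictor([sample(c) for _ in range(50)]) for c in range(3)]

    # recognition: pick the class whose predictor has the lowest accumulated error
    test = sample(1)
    scores = [prediction_error(A, test) for A in models]
    print("recognized class:", int(np.argmin(scores)))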


Neural Networks for Signal Processing X. Proceedings of the 2000 IEEE Signal Processing Society Workshop (Cat. No.00TH8501) | 2000

EEG classification by Autocorrelation-Pulse in left and right motor imaginary data

Irak V. Mayer; Haruhisa Takahashi; Sakamoto K

This paper proposes a classification method for imaginary right and left motor EEG using a new algorithm named Autocorrelation-Pulse (AP). The algorithm is based on the spatiotemporal pulse patterns generated from the autocorrelation values of the ongoing EEG data. A backpropagation feedforward neural network was used for classification; the structure of the network preserves the spatiotemporal characteristics of the signal. Simulation results show that the classification accuracy can reach 100% on individual subjects and 91% over all subjects when the correct pair of electrodes is selected.
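
The abstract specifies the front end (autocorrelation values of the ongoing EEG) but not how pulses are generated from them, so the sketch below stops at windowed per-channel autocorrelation features; the signal, window length, and lags are assumptions, and the pulse-coding and network stages are omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def autocorr(x, max_lag):
        # normalized autocorrelation of one window at lags 1..max_lag
        x = x - x.mean()
        denom = np.dot(x, x) + 1e-12
        return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

    # toy EEG: (channels, samples); real data would come from the chosen electrode pair
    eeg = rng.normal(size=(2, 1024))
    win, hop, max_lag = 128, 64, 16

    features = []
    for start in range(0, eeg.shape[1] - win + 1, hop):
        window = eeg[:, start:start + win]
        features.append(np.concatenate([autocorr(ch, max_lag) for ch in window]))
    features = np.array(features)  # (windows, channels * max_lag): classifier input
    print(features.shape)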


International Symposium on Neural Networks | 2002

Covariance phasor neural network

Haruhisa Takahashi

We present a phase covariance model that can represent stimulus intensity as well as feature binding (i.e., covariance). The model is described by complex neural equations and is a mean field model of stochastic neural models such as the Boltzmann machine and sigmoid belief networks. The covariance model represents the covariance between two units of a stochastic machine as the cosine of their phase difference. This enables us to calculate the covariance between two units, as well as the average activation, in a deterministic manner. The covariance model can give an elaborate mean field approximation; to calculate higher moments, a higher-order mean field model must be invoked. The covariance Hebbian self-organizing rule and the Boltzmann learning rule are then investigated on this model.
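
The central identity, covariance read off as the cosine of a phase difference, can be checked in a few lines: with unit-modulus complex activations z = exp(i * theta), the real part of z_i times the conjugate of z_j is exactly cos(theta_i - theta_j). This is a check of the identity only, not an implementation of the model.

    import numpy as np

    theta_i, theta_j = 0.7, 2.1
    z_i, z_j = np.exp(1j * theta_i), np.exp(1j * theta_j)

    # covariance-like quantity: Re(z_i * conj(z_j)) equals cos(theta_i - theta_j)
    cov = (z_i * np.conj(z_j)).real
    print(cov, np.cos(theta_i - theta_j))  # both ~ 0.1700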


International Conference on Neural Information Processing | 2002

Covariance phasor neural network as a mean field model

Haruhisa Takahashi

The covariance model can represent the covariance between two units of a stochastic machine as the cosine of their phase difference. This enables us to calculate the covariance between two units, as well as the average activation, in a deterministic manner. The covariance model can give an elaborate mean field approximation without invoking a higher-order mean field model. A covariance Hebbian self-organizing rule and a Boltzmann learning rule are then investigated on this model.


Electromyography and Clinical Neurophysiology | 2001

Imaginary motor movement EEG classification by Accumulative-Autocorrelation-Pulse.

Irak V. Mayer; Haruhisa Takahashi; Sakamoto K


International Conference on Machine Learning and Applications | 2003

A Fast Learning Decision-Based SVM for Multi-Class Problems.

Rameswar Debnath; N. Takahide; Haruhisa Takahashi

Collaboration


Dive into Haruhisa Takahashi's collaborations.

Top Co-Authors

Hanzhong Gu

University of Electro-Communications


Sakamoto K

University of Electro-Communications


N. Takahide

University of Electro-Communications
