Publication


Featured research published by Taichi Hayasaka.


Neural Networks | 2001

Upper bound of the expected training error of neural network regression for a Gaussian noise sequence

Katsuyuki Hagiwara; Taichi Hayasaka; Naohiro Toda; Shiro Usui; Kazuhiro Kuno

In neural network regression problems, often referred to as additive noise models, NIC (Network Information Criterion) has been proposed as a general model selection criterion for determining the optimal network size with high generalization performance. Although NIC has been derived using asymptotic expansion, it has been pointed out that this technique cannot be applied when the target function is in the family of assumed networks and the family is not minimal for representing the target function, i.e. the overrealizable case, in which NIC reduces to the well-known AIC (Akaike Information Criterion) and others, depending on the loss function. Because NIC is an unbiased estimator of the generalization error based on the training error, the expectations of the errors of neural networks must be derived for such cases. This paper gives upper bounds on the expectations of training errors with respect to the distribution of the training data, which we call the expected training error, for some types of networks under the squared error loss. In the overrealizable case, because the errors are determined by how well the networks fit the noise components included in the data, the target data set is taken to be a Gaussian noise sequence. For radial basis function networks and 3-layered neural networks with a bell-shaped activation function in the hidden layer, the expected training error is bounded above by $\sigma_*^2 - 2n\sigma_*^2\log T/T$, where $\sigma_*^2$ is the variance of the noise, $n$ is the number of basis functions or hidden units, and $T$ is the number of data. Furthermore, for 3-layered neural networks with a sigmoidal activation function in the hidden layer, we obtained an upper bound of $\sigma_*^2 - O(\log T/T)$ when $n > 2$. If the number of data is large enough, these bounds on the expected training error are smaller than $\sigma_*^2 - N(n)\sigma_*^2/T$ as evaluated in NIC, where $N(n)$ is the total number of network parameters.
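Placed side by side, the contrast with NIC's evaluation is the extra $\log T$ factor. The display below simply restates the bounds from the abstract, with $\mathbb{E}[\cdot]$ denoting expectation over the training data:

```latex
% Bounds restated from the abstract.
% RBF networks and 3-layered networks with bell-shaped hidden activations:
\mathbb{E}[\text{training error}] \;\le\; \sigma_*^2 - \frac{2n\,\sigma_*^2\log T}{T}
% 3-layered networks with sigmoidal hidden activations, n > 2:
\mathbb{E}[\text{training error}] \;\le\; \sigma_*^2 - O\!\left(\frac{\log T}{T}\right)
% Evaluation used by NIC, for comparison (N(n) = total number of parameters):
\sigma_*^2 - \frac{N(n)\,\sigma_*^2}{T}
```

For large $T$, a $\log T/T$ term dominates a $1/T$ term, which is why the bounds above fall below the NIC evaluation, as the abstract notes.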


Neural Networks for Signal Processing VI. Proceedings of the 1996 IEEE Signal Processing Society Workshop | 1996

On the least square error and prediction square error of function representation with discrete variable basis

Taichi Hayasaka; N. Toda; S. Usui; K. Hagiwara

One of the most important features of 3-layered neural networks is the adaptability of the basis functions. In this paper, to focus on this adaptability in the context of regression or curve-fitting, we restricted our attention to function representations in which the basis functions are modified according to associated discrete parameters. For such function representations, we derived the expectations of the least square error and the prediction square error with respect to the distribution of a set of samples using extreme value theory, provided that the given set of samples is an independent Gaussian noise sequence and the basis functions satisfy an appropriate orthonormality condition.
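To see where extreme value theory enters, consider an illustrative special case (a sketch, not the paper's derivation): with an orthonormal basis family, the projections of an independent Gaussian noise sequence onto $K$ candidate basis functions are i.i.d. standard normal after scaling, so selecting the best-fitting discrete parameter picks out their maximum squared value:

```latex
\mathbb{E}\Big[\max_{1\le k\le K} z_k^2\Big] \sim 2\log K
\qquad\text{as } K \to \infty,
\quad z_k \overset{\text{i.i.d.}}{\sim} N(0,1).
```

The least square error therefore drops by a term of order $\log K$ larger than for a single fixed basis function, which is the effect the expectations derived in the paper quantify.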


International Symposium on Neural Networks | 2003

Evolution and adaptation of neural networks

Paulito P. Palmes; Taichi Hayasaka; Shiro Usui

One important issue in developing dynamic algorithms that change the structure and weights of ANNs (artificial neural networks) is how to achieve a proper balance between network complexity and generalization capability. Typical hybrid approaches to this problem incorporate an EA (evolutionary algorithm) strategy using a population of backpropagation networks. Since individuals undergo backpropagation training, this approach is inefficient and inherits the pitfalls of gradient learning. SEPA (structure evolution and parameter adaptation) addresses these issues using an encoding scheme in which network weights and connections are encoded in matrices of real numbers. Network parameters are locally encoded and undergo local adaptation, with fitness evaluation consisting mainly of fast feed-forward matrix operations that can be implemented in a parallel or distributed environment. Experimental results show that SEPA's strategy produces optimal network structures with fast convergence, high consistency, and good generalization capability.
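As an illustration of this kind of encoding, the sketch below pairs a real-valued weight matrix with a binary connectivity mask and evaluates fitness with a single masked feed-forward pass. It is a minimal sketch under assumed details (layer sizes, mutation operator, selection scheme, and fitness definition are all hypothetical), not the published SEPA implementation:

```python
import copy
import numpy as np

# Hypothetical SEPA-style individual: weights and connectivity are plain
# matrices of real numbers, so structure and parameters evolve together.
class Individual:
    def __init__(self, n_in, n_hidden, n_out, rng):
        # Real-valued weight matrices (input -> hidden -> output).
        self.w1 = rng.normal(0.0, 1.0, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 1.0, (n_hidden, n_out))
        # Binary masks encode which connections exist (the structure).
        self.m1 = (rng.random((n_in, n_hidden)) < 0.5).astype(float)
        self.m2 = (rng.random((n_hidden, n_out)) < 0.5).astype(float)

    def forward(self, x):
        # Fitness evaluation is only masked feed-forward matrix operations.
        h = np.tanh(x @ (self.w1 * self.m1))
        return h @ (self.w2 * self.m2)

    def mutate(self, rng, sigma=0.1, p_flip=0.01):
        # Local parameter adaptation: Gaussian perturbation of the weights.
        self.w1 += rng.normal(0.0, sigma, self.w1.shape)
        self.w2 += rng.normal(0.0, sigma, self.w2.shape)
        # Structure evolution: occasionally switch connections on or off.
        flip1 = rng.random(self.m1.shape) < p_flip
        flip2 = rng.random(self.m2.shape) < p_flip
        self.m1 = np.where(flip1, 1.0 - self.m1, self.m1)
        self.m2 = np.where(flip2, 1.0 - self.m2, self.m2)

def fitness(ind, x, y):
    # Negative mean squared error, so higher is better.
    return -np.mean((ind.forward(x) - y) ** 2)

# Toy (mu + mu) loop with truncation selection; all settings hypothetical.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(100, 4)), rng.normal(size=(100, 1))
pop = [Individual(4, 8, 1, rng) for _ in range(20)]
for _ in range(50):
    pop.sort(key=lambda ind: fitness(ind, x, y), reverse=True)
    survivors = pop[: len(pop) // 2]
    pop = survivors + [copy.deepcopy(s) for s in survivors]
    for child in pop[len(pop) // 2:]:
        child.mutate(rng)
```

Because fitness needs only matrix products, every individual can be evaluated independently, which is the parallelism and efficiency argument made in the abstract.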


Genetic and Evolutionary Computation Conference | 2003

SEPA: structure evolution and parameter adaptation in feed-forward neural networks

Paulito P. Palmes; Taichi Hayasaka; Shiro Usui

In developing algorithms that dynamically change the structure and weights of ANNs (Artificial Neural Networks), there must be a proper balance between network complexity and generalization capability. SEPA addresses these issues using an encoding scheme in which network weights and connections are encoded in matrices of real numbers. Network parameters are locally encoded and locally adapted, with fitness evaluation consisting mainly of fast feed-forward operations. Experimental results on some well-known classification problems demonstrate SEPA's high consistency in classification, fast convergence, and good optimality of structure.


International Work-Conference on Artificial and Natural Neural Networks | 2001

Analysis on the Viewpoint Dependency in 3-D Object Recognition by Support Vector Machines

Taichi Hayasaka; Eiichi Ohnishi; Shigeki Nakauchi; Shiro Usui

In human 3-D object recognition, performance across viewpoint changes falls into two types: viewpoint-dependent and viewpoint-invariant. We analyzed the viewpoint dependency of objects under the theory of image-based object representation in the human brain (Poggio & Edelman 1990, Tarr 1995) using Support Vector Machines (Vapnik 1995). This computational approach suggests that the features of object images at different viewpoints are major factors in human performance on 3-D object recognition.
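The experimental logic can be sketched as follows: train an SVM on views of each object from a limited range of viewpoints and test on views at increasing angular offsets, so that the drop in accuracy with offset traces out the viewpoint dependency. The data pipeline below is entirely hypothetical (synthetic view features standing in for rendered object images); only the use of an SVM classifier follows the abstract:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
DIM = 32
# Each object gets a fixed random "shape" vector; its appearance is a smooth
# function of viewing angle plus noise, a crude stand-in for image features.
shapes = {obj: rng.normal(0.0, 1.0, DIM) for obj in (0, 1)}

def view_features(obj, angle_deg):
    phases = np.linspace(0.0, np.pi, DIM)
    return shapes[obj] * np.cos(np.radians(angle_deg) + phases) + rng.normal(0.0, 0.2, DIM)

# Train on views from a narrow range of angles.
train_angles = range(0, 40, 5)
X = np.array([view_features(o, a) for o in shapes for a in train_angles])
y = np.array([o for o in shapes for _ in train_angles])
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Test at increasing angular offsets; falling accuracy with offset is the
# signature of viewpoint-dependent recognition.
for offset in (0, 30, 60, 90):
    Xt = np.array([view_features(o, a + offset) for o in shapes for a in train_angles])
    yt = np.array([o for o in shapes for _ in train_angles])
    print(f"offset {offset:3d} deg: accuracy {clf.score(Xt, yt):.2f}")
```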


International Conference on Neural Information Processing | 1999

Determination of the number of hidden units from a statistical viewpoint

Taichi Hayasaka; Katsuyuki Hagiwara; N. Toda; Shiro Usui

One of the important problems for 3-layered neural networks (3-LNN) is determining the optimal network structure with high generalization ability. Although this can be formulated as a statistical model selection problem, applying traditional criteria to 3-LNN remains problematic. We suggest an effective type of criterion for the model selection problem of 3-LNN by analyzing the statistical properties of some simplified nonlinear models. Results of numerical experiments are also presented.
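The Neural Networks 2001 abstract above makes the difficulty concrete: for adaptive basis functions in the overrealizable case, fitting to noise reduces the training error at a rate of order $\log T/T$, while AIC-type criteria assume a $1/T$ penalty. A criterion sketch along those lines, with $\hat{\sigma}^2(n)$ the training error of the size-$n$ network (the second form is illustrative, with $c$ an unspecified constant, not the paper's exact proposal):

```latex
% AIC-type criterion: training error plus a 1/T complexity penalty.
C_{\mathrm{AIC}}(n) = \hat{\sigma}^2(n) + \frac{2\,N(n)\,\hat{\sigma}^2}{T}
% Illustrative log-T-corrected form suggested by the training-error analysis:
C(n) = \hat{\sigma}^2(n) + \frac{c\,n\,\hat{\sigma}^2\log T}{T}
```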


IEEE Transactions on Neural Networks | 2005

Mutation-based genetic neural network

Paulito P. Palmes; Taichi Hayasaka; Shiro Usui


The Brain & Neural Networks | 1997

On the Estimation and Prediction Errors of Function Representation with Orthonormal Discrete Variable Basis in Regression Model

Taichi Hayasaka; Katsuyuki Hagiwara; Naohiro Toda; Shiro Usui


IEICE Transactions on Information and Systems | 2003

On the Statistical Properties of Least Squares Estimators of Layered Neural Networks

Masashi Kitahara; Taichi Hayasaka; Naohiro Toda; Shiro Usui


Systems and Computers in Japan | 2006

An analysis of viewpoint dependency in three-dimensional object recognition using support vector machines

Taichi Hayasaka; Shigeki Nakauchi; Shiro Usui

Collaboration


Dive into Taichi Hayasaka's collaborations.

Top Co-Authors

Shiro Usui, RIKEN Brain Science Institute
Naohiro Toda, Toyohashi University of Technology
Paulito P. Palmes, Toyohashi University of Technology
Masashi Kitahara, Toyohashi University of Technology
Shigeki Nakauchi, Toyohashi University of Technology
Eiichi Ohnishi, Toyohashi University of Technology