Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Masanao Ohbayashi is active.

Publication


Featured research published by Masanao Ohbayashi.


International Symposium on Neural Networks | 1995

Universal learning network and computation of its higher order derivatives

Kotaro Hirasawa; Masanao Ohbayashi; Junichi Murata

In this paper, the universal learning network (ULN) is presented, which models and controls large scale complicated systems such as industrial plants and economic, social and life phenomena. A method for computing the higher order derivatives of the ULN is derived in order to provide learning ability. The basic idea of the ULN is that large scale complicated systems can be modeled by a network consisting of nonlinearly operated nodes and branches that may have arbitrary time delays, including zero or negative ones. It is shown that the first order derivatives of a ULN with sigmoid functions and one-sampling-time delays correspond to the backpropagation learning algorithm of recurrent neural networks.
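
The node-and-branch idea can be made concrete with a tiny simulation. The sketch below is only illustrative and assumes sigmoid nodes with an integer sample delay attached to each branch; the particular connections, weights, delays and input are made up, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical branch list: (source node, target node, weight, delay in samples).
branches = [
    (0, 1, 0.8, 1),   # node 0 -> node 1, one-sample delay
    (1, 2, -0.5, 0),  # node 1 -> node 2, no delay
    (2, 0, 0.3, 2),   # node 2 -> node 0, two-sample delay
]

n_nodes, T, max_delay = 3, 20, 2
h = np.zeros((T + max_delay, n_nodes))   # node outputs over time (zero history)
u = np.zeros((T + max_delay, n_nodes))   # external inputs
u[max_delay:, 0] = 1.0                   # constant input into node 0

for t in range(max_delay, T + max_delay):
    for j in range(n_nodes):             # nodes evaluated in index order, so a
        net = u[t, j]                    # zero-delay branch can use this step's value
        for src, dst, w, d in branches:
            if dst == j:
                net += w * h[t - d, src]
        h[t, j] = sigmoid(net)

print(h[-1])   # node outputs at the last sampling instant
```

With every delay fixed to one sample and sigmoid nodes, differentiating a criterion on these outputs with respect to the weights recovers the backpropagation-through-time picture mentioned in the abstract.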


Systems, Man and Cybernetics | 1998

Learning Petri network and its application to nonlinear system control

Kotaro Hirasawa; Masanao Ohbayashi; Singo Sakai; Jinglu Hu

Recent findings in brain science suggest that there exists a distribution of functions, meaning that specific parts of the brain realize specific functions. This paper introduces a new brain-like model called the Learning Petri Network (LPN), which has the capability of function distribution and learning. The idea is to use a Petri net to realize the distribution of functions and to incorporate the learning and representation ability of a neural network into the Petri net. The resulting LPN can be used in the same way as a neural network to model and control dynamic systems, while it differs from a neural network in that it has the capability of function distribution. An application of the LPN to nonlinear crane control systems is discussed. It is shown via numerical simulations that the proposed LPN controller outperforms a commonly used neural network controller.
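
The abstract does not give the LPN equations, so the following is only one hypothetical way to picture the combination: places carry real-valued markings, while transitions carry trainable weights, fire with sigmoid intensities and deposit tokens back into places. It illustrates the general idea, not the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical continuous Petri-net-like layer: places hold real-valued
# "markings"; each transition reads the markings through trainable input
# weights, fires with a sigmoid intensity, and deposits tokens into places
# through trainable output weights.
n_places, n_transitions = 4, 3
W_in = rng.normal(scale=0.5, size=(n_transitions, n_places))   # place -> transition
W_out = rng.normal(scale=0.5, size=(n_places, n_transitions))  # transition -> place

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(marking):
    firing = sigmoid(W_in @ marking)     # transition firing intensities
    return marking + W_out @ firing      # tokens produced by the firings

marking = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(5):
    marking = step(marking)
print(marking)
```

Learning would then presumably adjust W_in and W_out against a control criterion, just as one trains a neural network, while the place/transition structure provides the distribution of functions.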


International Symposium on Neural Networks | 1996

Forward propagation universal learning network

Kotaro Hirasawa; Masanao Ohbayashi; Masaru Koga; Masaaki Harada

In this paper, a method for computing the higher order derivatives of the universal learning network (ULN), which models and controls large scale complicated systems such as industrial plants and economic, social and life phenomena, is derived by forward propagation. It is shown by comparison that forward propagation is preferable to backward propagation in computation time when higher order derivatives with respect to time-invariant parameters need to be calculated. It is also shown that the first order derivatives of a ULN with sigmoid functions and one-sampling-time delays correspond to the forward propagation learning algorithm of recurrent neural networks. Furthermore, it is suggested that robust control and chaotic control can be realized if higher order derivatives are available.
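
The flavour of forward propagation of derivatives can be shown on a single recurrent node: the sensitivity of the state with respect to a parameter is carried forward alongside the state itself, so the gradient of a criterion on the state is available without a backward sweep over time. This is a minimal sketch with one parameter and a made-up input and target; the paper's method covers whole networks and higher order derivatives.

```python
import numpy as np

# One recurrent node h(t) = tanh(w*h(t-1) + u(t)); the sensitivity
# s(t) = dh(t)/dw is propagated forward together with the state.
w = 0.7
T = 50
u = np.sin(0.2 * np.arange(T))     # hypothetical input sequence
target = 0.5                       # hypothetical target for the final state

h, s = 0.0, 0.0                    # state and its sensitivity dh/dw
for t in range(T):
    pre = w * h + u[t]
    h_new = np.tanh(pre)
    s = (1.0 - h_new ** 2) * (h + w * s)   # chain rule applied forward in time
    h = h_new

loss = 0.5 * (h - target) ** 2
dloss_dw = (h - target) * s        # gradient obtained purely by forward propagation
print(loss, dloss_dw)
```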


International Symposium on Neural Networks | 1995

Chaos control using second order derivatives of universal learning network

Masaru Koga; Kotaro Hirasawa; Junichi Murata; Masanao Ohbayashi

A novel method is proposed for controlling chaotic phenomena on a universal learning network (ULN). Generation and die-out of chaotic phenomena are controlled by changing the Lyapunov number of the ULN, which is accomplished by adjusting the ULN parameters so as to minimize a criterion function defined as the difference between the desired Lyapunov number and its actual value. Both a gradient method utilizing second order derivatives of the ULN and a random search method are adopted to optimize the parameters. Control of the generation and die-out of chaotic phenomena is easily realized in simulations.
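
As a rough illustration of the criterion, the sketch below tunes the parameter of a logistic map, rather than a ULN, so that its numerically estimated largest Lyapunov exponent approaches a desired value, using only the random search variant; the gradient method with second order derivatives described in the paper is not reproduced here, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def lyapunov(r, n=2000, burn=200, x0=0.3):
    """Estimate the largest Lyapunov exponent of the logistic map x -> r*x*(1-x)."""
    x, acc = x0, 0.0
    for t in range(n):
        if t >= burn:
            acc += np.log(abs(r * (1.0 - 2.0 * x)) + 1e-12)
        x = r * x * (1.0 - x)
    return acc / (n - burn)

target = -0.2          # negative target exponent: drive the map out of chaos
r = 3.9                # start deep in the chaotic regime
best = (lyapunov(r) - target) ** 2

for _ in range(200):   # plain random search over the single parameter
    cand = float(np.clip(r + rng.normal(scale=0.05), 2.5, 4.0))
    err = (lyapunov(cand) - target) ** 2
    if err < best:
        r, best = cand, err

print(r, lyapunov(r))
```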


International Symposium on Neural Networks | 1996

Robust learning control using universal learning network

Masanao Ohbayashi; Kotaro Hirasawa; Junichi Murata; Masaaki Harada

The characteristics of control system design using the universal learning network (ULN) are that the system to be controlled and the controller are both constructed as ULNs, and that the controller is optimally tuned through learning. The ULN has the same generalization ability as a neural network (NN), so a controller constructed as a ULN is able to control the system favourably under conditions different from those of the learning stage; however, stability cannot be guaranteed sufficiently. In this paper, we propose a robust learning control method using the ULN and its second order derivatives. The proposed method achieves better performance and robustness than a commonly used NN controller. The robust learning control considered here is defined as follows: even if the initial values of the node outputs change from those used at learning, the control system is able to reduce the influence of this change on the other node outputs and can control the system almost as well as in the case of no variation.
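
A toy analogue of the robustness criterion: tune a single feedback gain so that a scalar closed loop both regulates its state to zero and is insensitive to perturbations of the initial state, with the sensitivity estimated by finite differences. The paper instead penalizes such sensitivities through second order derivatives of the ULN; the plant, weights and learning rate below are invented for illustration.

```python
import numpy as np

a, b, T = 0.9, 0.5, 20
lam = 0.5                                   # weight on the robustness term

def rollout(k, x0):
    xs, x = [], x0
    for _ in range(T):
        x = a * x + b * (-k * x)            # plant with feedback gain k
        xs.append(x)
    return np.array(xs)

def criterion(k, x0=1.0, eps=1e-3):
    xs = rollout(k, x0)
    xs_pert = rollout(k, x0 + eps)
    tracking = np.sum(xs ** 2)                          # regulate x towards zero
    sensitivity = np.sum(((xs_pert - xs) / eps) ** 2)   # influence of the initial value
    return tracking + lam * sensitivity

k, lr, h = 0.0, 1e-3, 1e-4                  # numerical-gradient descent on the gain
for _ in range(500):
    g = (criterion(k + h) - criterion(k - h)) / (2 * h)
    k -= lr * g

print(k, criterion(k))
```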


International Symposium on Neural Networks | 1999

A new indirect encoding method with variable length gene code to optimize neural network structures

Kunikazu Kobayashi; Masanao Ohbayashi

A new encoding method for optimizing neural network structures is proposed. It is based on an indirect encoding with a variable length gene code. Its ability to find an optimal solution is higher than that of direct encoding methods, because redundant information in the gene code is reduced and the search space is therefore smaller. The proposed method also makes adding and deleting hidden units easy. The performance of the proposed method is evaluated through computer simulations.
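
The variable-length idea can be sketched as a genome that is simply a list of hidden-unit genes, so that adding or deleting a hidden unit is a one-gene edit. The representation below is hypothetical and omits the fitness evaluation and selection loop of the actual method.

```python
import numpy as np

rng = np.random.default_rng(2)
N_IN, N_OUT = 3, 1

# Hypothetical variable-length genome: one gene per hidden unit, holding that
# unit's input weights and its weights to the output layer.
def random_gene():
    return {"w_in": rng.normal(size=N_IN), "w_out": rng.normal(size=N_OUT)}

def add_unit(genome):
    return genome + [random_gene()]

def delete_unit(genome):
    if len(genome) <= 1:
        return genome
    i = rng.integers(len(genome))
    return genome[:i] + genome[i + 1:]

def decode_and_forward(genome, x):
    """Decode the genome into a one-hidden-layer network and run it on x."""
    h = np.tanh(np.array([g["w_in"] @ x for g in genome]))
    W_out = np.array([g["w_out"] for g in genome])      # (n_hidden, N_OUT)
    return h @ W_out

genome = [random_gene() for _ in range(2)]
genome = add_unit(genome)        # structural mutation: grow the network
genome = delete_unit(genome)     # structural mutation: shrink it again
print(len(genome), decode_and_forward(genome, np.array([0.5, -1.0, 0.2])))
```

A genetic algorithm would evaluate each decoded network on the task, select the better genomes, and apply these structural mutations together with weight mutations.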


International Symposium on Neural Networks | 1998

A new random search method for neural network learning-RasID

Jinglu Hu; Kotaro Hirasawa; Junichi Murata; Masanao Ohbayashi; Yurio Eki

This paper presents a novel random search scheme called RasID for neural network training. The idea is to introduce a sophisticated probability density function (PDF) for generating the search vector. The PDF provides two parameters for realizing either intensified search in areas where good solutions are likely to be found locally, or diversified search to escape from a local minimum, based on the success or failure of past searches. Gradient information is used to improve the search performance. The proposed scheme is applied to the training of layered neural networks and is benchmarked against other deterministic and nondeterministic methods.
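
The intensification/diversification behaviour can be mimicked with a much simpler adaptive random search than RasID itself: shrink the step scale after a success, widen it after a run of failures. The sketch below does not use RasID's probability density function or its gradient term; the toy loss and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(w):
    """Hypothetical training loss: fit a tiny fixed regression problem."""
    X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([1.0, -1.0, 0.5])
    return np.mean((np.tanh(X @ w) - y) ** 2)

w = rng.normal(size=2)
best = loss(w)
scale, fails = 0.5, 0

for _ in range(500):
    cand = w + rng.normal(scale=scale, size=w.shape)   # random search vector
    f = loss(cand)
    if f < best:                      # success: accept and intensify locally
        w, best = cand, f
        scale = max(scale * 0.7, 1e-3)
        fails = 0
    else:                             # failure: after repeated failures, diversify
        fails += 1
        if fails >= 10:
            scale = min(scale * 2.0, 2.0)
            fails = 0

print(best, w)
```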


Systems, Man and Cybernetics | 1997

Probabilistic universal learning network

Kotaro Hirasawa; Masanao Ohbayashi; Junichi Murata

The universal learning network (ULN) is a framework for modelling and controlling nonlinear large-scale complex systems such as physical, social and economic systems. A generalized learning algorithm has been proposed for the ULN, which can be used in a unified manner for almost all kinds of networks, such as static/dynamic networks, layered/recurrent networks, time delay neural networks and multibranch networks. However, since the signals transmitted through a ULN must be deterministic, stochastic signals contaminated with noise cannot be propagated through it. In this paper, the Probabilistic Universal Learning Network (PrULN) is presented, in which a new learning algorithm optimizes a criterion function defined on stochastic dynamic systems. By using the PrULN, the following are expected: (1) the generalization capability of the learning networks will be improved; (2) more sophisticated stochastic control will be obtained than with conventional stochastic control; (3) design problems for complex systems such as chaotic systems can be addressed, whereas the main research topic for chaotic systems has so far been only the analysis of such systems.
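
The gist of optimizing a criterion on stochastic dynamics can be illustrated by Monte Carlo: propagate a noisy signal through a small recurrent node many times, average the criterion, and adjust the parameter to reduce that average. The finite-difference update below is only a stand-in for the paper's analytic learning algorithm, and the node, noise level and target are made up.

```python
import numpy as np

def expected_cost(w, n_samples=200, T=30, noise_std=0.1, seed=0):
    """Monte Carlo estimate of E[ sum_t (h_t - 0.4)^2 ] for a noisy recurrent node."""
    rng = np.random.default_rng(seed)     # common random numbers across evaluations
    total = 0.0
    for _ in range(n_samples):
        h, c = 0.0, 0.0
        for _ in range(T):
            h = np.tanh(w * h + 0.5 + noise_std * rng.normal())   # stochastic signal
            c += (h - 0.4) ** 2
        total += c
    return total / n_samples

# Tune w to reduce the *expected* criterion; a symmetric finite difference
# stands in for an analytic stochastic gradient.
w, lr, eps = 0.0, 0.02, 0.05
for _ in range(30):
    g = (expected_cost(w + eps) - expected_cost(w - eps)) / (2 * eps)
    w -= lr * g

print(w, expected_cost(w))
```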


Systems, Man and Cybernetics | 1996

Stability theory of universal learning network

Kotaro Hirasawa; Masanao Ohbayashi; Masaru Koga; Naohiro Kusumi

Higher order derivatives of the universal learning network (ULN), which can model and control large scale complicated systems such as industrial plants and economic, social and life phenomena, have previously been derived by forward and backward propagation computing methods. In this paper, a new concept of nth order asymptotic orbital stability for the ULN is defined using the higher order derivatives of the ULN, and a sufficient condition for asymptotic orbital stability of the ULN is derived. It is also shown that if 3rd order asymptotic orbital stability of a recurrent neural network is proved, asymptotic orbital stability of order higher than 3rd is guaranteed.
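
The first order (linearized) special case of such a stability condition is easy to check numerically: a periodic orbit is asymptotically orbitally stable if the product of the map's derivatives around one period has magnitude below one. The sketch uses a logistic map rather than a ULN and does not touch the nth order condition.

```python
# First-order orbital stability check for a period-2 orbit of the logistic map.
r = 3.2
f = lambda x: r * x * (1.0 - x)
df = lambda x: r * (1.0 - 2.0 * x)

x = 0.3
for _ in range(1000):          # let the trajectory settle onto the period-2 orbit
    x = f(x)
x1, x2 = x, f(x)

multiplier = df(x1) * df(x2)   # Floquet-like multiplier over one period
print(x1, x2, multiplier, "stable" if abs(multiplier) < 1.0 else "unstable")
```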


Systems, Man and Cybernetics | 1997

Evaluation of multi-layered RBF networks

Kotaro Hirasawa; T. Matsuoka; Masanao Ohbayashi; Junichi Murata

In this paper, the performance of multilayered radial basis function (RBF) networks, which use Gaussian functions in place of the sigmoidal functions of multilayered neural networks (NNs), is investigated. The focus is on the difference in approximation ability between multilayered RBF networks and NNs. A function approximation problem is employed to evaluate the performance of multilayered RBF networks, with several different types of functions used as the functions to be approximated. A gradient method is employed to optimize the parameters, including the centers, widths, and linear connection weights to the output nodes. The results show that RBFs do not always have significant advantages over sigmoidal functions when used in multilayered networks.
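
A minimal version of the networks evaluated here is a single layer of Gaussian units whose centers, widths and linear output weights are all tuned by a gradient method. The target function, network size and learning rate below are chosen arbitrarily for illustration, and the paper's networks are deeper.

```python
import numpy as np

rng = np.random.default_rng(5)

n_hidden = 8
c = np.linspace(-3.0, 3.0, n_hidden)        # centers
s = np.full(n_hidden, 1.0)                  # widths
a = rng.normal(scale=0.1, size=n_hidden)    # linear output weights

def forward(x):
    phi = np.exp(-((x - c) ** 2) / (2.0 * s ** 2))   # Gaussian hidden units
    return phi @ a, phi

lr = 0.05
for _ in range(5000):                       # stochastic gradient descent on all parameters
    x = rng.uniform(-3.0, 3.0)
    y = np.sin(x)                           # function to be approximated
    y_hat, phi = forward(x)
    e = y_hat - y
    grad_a = e * phi
    grad_c = e * a * phi * (x - c) / s ** 2
    grad_s = e * a * phi * (x - c) ** 2 / s ** 3
    a -= lr * grad_a
    c -= lr * grad_c
    s = np.maximum(s - lr * grad_s, 0.1)    # keep widths strictly positive

for x in (-2.0, 0.0, 1.5):
    print(x, forward(x)[0], np.sin(x))
```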

Collaboration


Dive into Masanao Ohbayashi's collaborations.
