Norio Baba
University of Tokushima
Publications
Featured research published by Norio Baba.
Neural Networks | 1989
Norio Baba
Abstract Back-propagation may be the most widely used method for adapting artificial neural networks for pattern classification. However, an important limitation of this method is that it sometimes falls into a local minimum of the error function. In this paper, the random optimization method of Matyas and its modified algorithm are used to learn the weights and parameters of a neural network. Research indicates that these algorithms can be successfully utilized to find the global minimum of the error function of neural networks.
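The idea above can be sketched as follows: perturb the weight vector with Gaussian noise and keep the perturbation only when the error decreases, in the spirit of Matyas' random optimization. This is a minimal illustration, not the paper's setup; the tiny 2-2-1 tanh network, the XOR data, the step scale, and the iteration count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(w, X, y):
    # Hypothetical tiny 2-2-1 network: w packs both weight layers.
    W1, W2 = w[:4].reshape(2, 2), w[4:6]
    h = np.tanh(X @ W1)
    return np.mean((h @ W2 - y) ** 2)

# Toy XOR-style data (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Matyas-style random optimization: propose a Gaussian perturbation
# of the weights and accept it only if the error improves.
w = rng.normal(size=6)
err0 = mse(w, X, y)
best = err0
for _ in range(20000):
    xi = rng.normal(scale=0.5, size=6)
    cand = mse(w + xi, X, y)
    if cand < best:
        w, best = w + xi, cand
```

Because the scheme never accepts a worsening step, the error is monotonically non-increasing; the modified algorithm in the paper addresses how to keep such a search from stalling.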
Information Sciences | 1977
Norio Baba; Toshio Shoman; Yoshikazu Sawaragi
Abstract The purpose of this paper is to investigate the random optimization method, which was devised for function minimization. Concerning the convergence of this method, Matyas gave interesting theorems. However, there is a questionable point in his proofs, and the assumptions used in them are too severe. In this paper, a modified theorem concerning the convergence of the random optimization method is given.
IEEE Transactions on Systems, Man, and Cybernetics | 1975
Norio Baba; Yoshikazu Sawaragi
We propose a new nonstationary random environment R(C1(t,ω),...,Cr(t,ω)), where t represents time and ω ∈ Ω, Ω being the supporting set of a probability measure space (Ω, B, P). Moreover, the learning performance of the L_{R-I} scheme under R(C1(t,ω),...,Cr(t,ω)) is discussed.
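The linear reward-inaction (L_{R-I}) scheme referenced above can be sketched as follows: on a favorable environment response the chosen action's probability is increased, and on an unfavorable response nothing changes. This is a standard-textbook sketch, not the paper's nonstationary analysis; the step size θ, the two-action environment, and the reward probabilities are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def lri_step(p, a, reward, theta=0.1):
    """One L_{R-I} (linear reward-inaction) update.
    p: action-probability vector; a: index of the chosen action;
    reward: True on a favorable response, False otherwise (inaction)."""
    if reward:
        p = p * (1 - theta)  # shrink every probability...
        p[a] += theta        # ...and move the freed mass to the rewarded action
    return p

# Stationary 2-action environment (illustrative): action 0 is
# rewarded with probability 0.8, action 1 with probability 0.2.
c = np.array([0.8, 0.2])
p = np.array([0.5, 0.5])
for _ in range(2000):
    a = rng.choice(2, p=p)
    p = lri_step(p, a, rng.random() < c[a])
```

Note that the update preserves the probability simplex: the scaled vector sums to 1 - θ and the added θ restores the total to 1. The paper's contribution concerns how such a scheme behaves when the reward probabilities Ci themselves drift over time.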
International Journal of Systems Science | 1980
Norio Baba; T. Soeda; Toshio Shoman; Y. Sawaragi
Abstract An application of the stochastic automaton to the investment game is considered. It is shown that a stochastic automaton with learning properties provides an efficient method for the investment game.
IEEE Transactions on Systems, Man, and Cybernetics | 1985
Norio Baba
The learning behaviours of hierarchical structure automata operating in general multiteacher environments are considered. It is shown that the generalized version of the reinforcement algorithm is absolutely expedient.
International Journal of Systems Science | 1988
Norio Baba; Hiroshi Takeda; Takahiro Miyake
An interactive optimization algorithm for multi-objective programming that utilizes the random optimization method is proposed. It is shown that a personal computer can be used successfully in order to find an appropriate solution in a dialogue mode.
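The dialogue-mode idea above can be sketched with a weighted scalarization whose weights the decision maker revises between runs, each run solved by random optimization. Everything concrete here is an assumption for illustration: the two quadratic objectives, the Gaussian proposal scale, and the iteration budget are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two conflicting objectives (hypothetical stand-ins).
f1 = lambda x: np.sum((x - 1.0) ** 2)
f2 = lambda x: np.sum((x + 1.0) ** 2)

def solve(weights, iters=5000, sigma=0.3):
    """Random-optimization search on a weighted scalarization.
    In a dialogue mode the decision maker would inspect the result
    and call solve() again with revised weights."""
    scal = lambda x: weights[0] * f1(x) + weights[1] * f2(x)
    x = rng.normal(size=2)
    best = scal(x)
    for _ in range(iters):
        cand = x + rng.normal(scale=sigma, size=2)
        v = scal(cand)
        if v < best:  # accept only improving proposals
            x, best = cand, v
    return x

x_a = solve((0.5, 0.5))  # balanced preference pulls toward the origin
```

With equal weights the scalarized optimum of these two quadratics lies at the origin, so each interactive round is an ordinary single-objective random search.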
IEEE Transactions on Systems, Man, and Cybernetics | 1983
Norio Baba
Learning behaviors of a stochastic automaton operating in a multiteacher environment are considered. As a generalized form of the L_{R-I} scheme, the GL_{R-I} scheme is proposed as a reinforcement scheme in a multiteacher environment. It is shown that the GL_{R-I} scheme is absolutely expedient and ε-optimal in the general n-teacher environment. Learning behaviors of the GL_{R-I} scheme are simulated by computer and the results indicate the effectiveness of the GL_{R-I} scheme.
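One plausible reading of a multiteacher generalization is to scale the single-teacher reward-inaction step by the fraction of the n teachers that rewarded the chosen action. The sketch below follows that reading; it is an assumption for illustration, not the paper's exact GL_{R-I} update, and the step size θ and binary response coding are likewise assumed.

```python
import numpy as np

def glri_step(p, a, responses, theta=0.05):
    """Sketch of a GL_{R-I}-style update: the L_{R-I} reward step is
    scaled by the average of the n teachers' binary responses.
    p: action-probability vector; a: chosen action index;
    responses: iterable of 0/1 teacher responses (assumed coding)."""
    s = theta * np.mean(responses)  # effective step from the average reward
    p = p * (1 - s)                 # shrink all probabilities...
    p[a] += s                       # ...and credit the chosen action
    return p

# Example round with three teachers, two of whom reward action 1.
resp = [1, 0, 1]
p = glri_step(np.array([0.25, 0.75]), 1, resp, theta=0.1)
```

When every teacher rewards the action this reduces to the ordinary L_{R-I} step, and when none do the probabilities are left unchanged, matching the inaction character of the scheme.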
IEEE Transactions on Systems, Man, and Cybernetics | 1987
Norio Baba
Learning behaviors of hierarchical structure stochastic automata are considered in a nonstationary multiteacher environment. It is shown that an extended form of an algorithm proposed by M.A.L. Thathachar and K.R. Ramakrishnan (1981) ensures absolute expediency under some conditions. As a practical application of hierarchical structure stochastic automata, intelligent behavior is considered of robot manipulators going through a maze having a large number of gates that close with unknown rejecting probabilities. It is shown that hierarchical structure stochastic automata can be successfully utilized to let robot manipulators find the best way through the maze.
IEEE Transactions on Systems, Man, and Cybernetics | 1983
Norio Baba
Learning behaviours of variable-structure stochastic automata under a multiteacher environment are considered. The concepts of absolute expediency and ε-optimality in a single-teacher environment are extended by the introduction of an average weighted reward and are redefined for a multiteacher environment. As an extended form of the absolutely expedient learning algorithm, a general class of nonlinear learning algorithm, called the GAE scheme, is proposed as a reinforcement scheme in a multiteacher environment. It is shown that the GAE scheme is absolutely expedient and ε-optimal in the general n-teacher environment. Learning behaviours of the GAE scheme in various multiteacher environments are simulated by computer and the results indicate the effectiveness of the GAE scheme.
Advanced Robotics | 1989
Norio Baba; Kiyofumi Kamimae
The collision avoidance problem of a robot manipulator whose workspace includes moving objects is considered in this paper. It is shown that the proposed computer simulation system can be used in a dialogue mode with a designer to check whether or not collision with obstacles is avoided and to determine the appropriate movement.