
Publication


Featured research published by Katsunari Shibata.


IEEE Journal of Solid-State Circuits | 1993

A self-learning digital neural network using wafer-scale LSI

Moritoshi Yasunaga; Noboru Masuda; Masayoshi Yagyu; Mitsuo Asai; Katsunari Shibata; Minoru Yamada; Takahiro Sakaguchi; Masashi Hashimoto

A large-scale, dual-network architecture using wafer-scale integration (WSI) technology is proposed. Using 0.8-µm CMOS technology, up to 144 self-learning digital neurons were integrated on each of eight 5-inch silicon wafers. Neural functions and the back-propagation (BP) algorithm were mapped to digital circuits. The complete hardware system packaged more than 1000 neurons within a 30 cm cube. The dual-network architecture allowed high-speed learning at more than 2 giga connection updates per second (GCUPS). The high fault tolerance of the neural network and the proposed defect-handling techniques overcame the yield problem of WSI. The hardware can be connected to a host workstation and used to simulate a wide range of artificial neural networks. Signature verification and stock price prediction have already been demonstrated with this hardware.
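The abstract notes that neural functions and the BP algorithm were mapped to digital circuits. A common ingredient of such digital neuron implementations is fixed-point multiply-accumulate arithmetic with a lookup-table sigmoid; the sketch below illustrates that general idea in Python. The bit widths, table size, and scaling here are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical fixed-point scheme: 8 fractional bits (values scaled by 256).
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS

# Precomputed sigmoid lookup table over [-8, 8): digital neuro-hardware
# typically replaces the transcendental function with a small LUT.
LUT_SIZE = 256
xs = np.linspace(-8.0, 8.0, LUT_SIZE, endpoint=False)
SIGMOID_LUT = np.round(SCALE / (1.0 + np.exp(-xs))).astype(np.int32)

def to_fixed(x):
    # Convert a real value to the integer fixed-point representation.
    return int(round(x * SCALE))

def fixed_neuron(weights, inputs):
    # Wide integer multiply-accumulate, then rescale once at the end.
    acc = sum(w * x for w, x in zip(weights, inputs)) >> FRAC_BITS
    # Clamp the pre-activation into the LUT's input range [-8, 8).
    z = max(-8 * SCALE, min(8 * SCALE - 1, acc))
    idx = (z + 8 * SCALE) * LUT_SIZE // (16 * SCALE)
    return SIGMOID_LUT[idx]

w = [to_fixed(0.5), to_fixed(-0.25)]
x = [to_fixed(1.0), to_fixed(2.0)]
y = fixed_neuron(w, x)   # fixed-point sigmoid(0.5*1.0 - 0.25*2.0) = sigmoid(0)
```

With a pre-activation of exactly zero, the LUT returns the fixed-point code for 0.5 (128 at this scale), matching the floating-point sigmoid.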


international symposium on neural networks | 1991

A self-learning neural network composed of 1152 digital neurons in wafer-scale LSIs

Moritoshi Yasunaga; Noboru Masuda; Masayoshi Yagyu; Mitsuo Asai; Katsunari Shibata; Minoru Yamada; Takahiro Sakaguchi; Masashi Hashimoto

The design, fabrication, and evaluation of a compact self-learning neural network made up of more than 1000 neurons are described. A time-sharing bus architecture decreases the number of circuits required and makes possible flexible and expandable networks. Neural functions and the back-propagation (BP) algorithm were mapped to binary digital circuits. A dual-network architecture allows high-speed learning. The hardware can be connected to a host workstation and used for a wide range of artificial neural networks. Signature verification and stock price prediction have already been demonstrated with this hardware. The peak learning speed was about 10 times faster than BP simulation on a Hitachi S-820 supercomputer.


Archive | 2011

Emergence of Intelligence through Reinforcement Learning with a Neural Network

Katsunari Shibata

“There exist many robots that faithfully execute given programs describing how to perform image recognition, action planning, control, and so forth. Can we call them intelligent robots?” In this chapter, the author, who has long held this skepticism, describes the possibility of the emergence of intelligence or higher functions through the combination of Reinforcement Learning (RL) and a Neural Network (NN), reviewing his work to date.


international conference on neural information processing | 2008

Contextual Behaviors and Internal Representations Acquired by Reinforcement Learning with a Recurrent Neural Network in a Continuous State and Action Space Task

Hiroki Utsunomiya; Katsunari Shibata

For progress toward human-like intelligence in robots, autonomous and purposive learning of an adaptive memory function is significant. The combination of reinforcement learning (RL) and a recurrent neural network (RNN) seems promising for this. However, it had not been applied to a continuous state-action space task, nor had its internal representations been analyzed in depth. In this paper, it is shown that in a continuous state-action space task a robot learned to memorize necessary information and to behave appropriately according to it, even though no special technique other than RL and an RNN was used. Three types of hidden neurons that seemed to contribute to remembering the necessary information were observed. Furthermore, by manipulating them, the robot changed its behavior as if the memorized information had been forgotten or swapped. This suggests the potential for the emergence of higher functions in this very simple learning system.


international symposium on neural networks | 2011

Discovery of pattern meaning from delayed rewards by reinforcement learning with a recurrent neural network

Katsunari Shibata; Hiroki Utsunomiya

In this paper, through the combination of reinforcement learning and a recurrent neural network, the authors try to explain how humans can discover the meaning of patterns and acquire appropriate behaviors based on it. Using a system with a real movable camera, a simple task demonstrates that the system discovers pattern meaning from delayed rewards by reinforcement learning with a recurrent neural network. When the system moves its camera in the direction of an arrow presented on a display, it gets a reward. One of four kinds of arrow is chosen randomly at each episode, and the input of the network is 1,560 visual signals from the camera. After learning, the system could move its camera in the arrow's direction. Some hidden neurons were found to represent the arrow direction independently of the presented arrow pattern, and to keep it after the arrow disappeared from the image, even though no arrow was visible when the system was rewarded and no one told the system that the arrow direction is important for getting the reward. Generalization to some new arrow patterns and an associative memory function can also be seen to some extent.
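The finding that hidden neurons kept the arrow direction after the arrow disappeared can be illustrated with a minimal recurrent unit: a strong positive self-connection lets a neuron latch a transient cue. This is only an illustrative sketch with hand-set weights, not the paper's learned network.

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# Hand-set weights (not learned): the strong self-connection w_self
# turns the unit into a latch with two stable states.
w_in, w_self, bias = 6.0, 6.0, -3.0

h = 0.0
trace = []
for t in range(10):
    cue = 1.0 if t == 2 else 0.0   # the "arrow" is visible only at t = 2
    h = sigmoid(w_in * cue + w_self * h + bias)
    trace.append(h)
# Before the cue, h stays near 0; once driven high by the cue, the
# self-connection keeps h near 1 after the cue disappears.
```

The unit's two fixed points (one near 0, one near 1) are what make the retained activity robust; a learned RNN memory neuron behaves analogously.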


intelligent robots and systems | 2000

Prosthetic hand control based on torque estimation from EMG signals

Satoshi Morita; Katsunari Shibata; Xin-Zhi Zheng; Koji Ito

In this paper, we propose a direct torque control method for a prosthetic hand. To estimate joint torque from EMG signals, an artificial neural network trained by the feedback error learning scheme is used. Two-DOF motions, i.e., hand grasping/opening and arm flexion/extension, are considered. In the experiments, two measurement conditions for the EMG signal are prepared: the forearm from which the EMG signal is measured is either free or fixed. It is then verified that the neural network can learn the relation between the EMG signal and the joint torque under both measurement conditions.
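Feedback error learning trains a feedforward network by using the output of a conventional feedback controller as the error signal: as the network's torque estimate improves, the feedback component shrinks toward zero. The sketch below shows one update step in that style; the network sizes, gains, learning rate, and signals are all hypothetical, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: rectified-EMG channels and 2-DOF joint torques.
n_emg, n_hidden, n_joint = 4, 10, 2

W1 = rng.normal(scale=0.3, size=(n_hidden, n_emg))
W2 = rng.normal(scale=0.3, size=(n_joint, n_hidden))

def nn_torque(emg):
    # Feedforward joint-torque estimate from EMG levels.
    h = np.tanh(W1 @ emg)
    return W2 @ h, h

def feedback_torque(q_des, q, dq, kp=5.0, kd=1.0):
    # Conventional PD feedback controller on the joint angles.
    return kp * (q_des - q) - kd * dq

def fel_step(emg, tau_fb, lr=0.01):
    # Feedback error learning: the feedback controller's output serves
    # as the error signal, so the network output grows to take over the
    # torque that the feedback loop is still supplying.
    global W1, W2
    tau_nn, h = nn_torque(emg)
    err = -tau_fb                      # output should increase by tau_fb
    dh = (W2.T @ err) * (1.0 - h ** 2)
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer(dh, emg)

emg = rng.uniform(0.0, 1.0, size=n_emg)
tau_fb = feedback_torque(np.array([0.5, -0.2]),
                         np.zeros(n_joint), np.zeros(n_joint))
before, _ = nn_torque(emg)
fel_step(emg, tau_fb)
after, _ = nn_torque(emg)
# After one step, the estimate moves in the direction of the feedback torque.
```

Repeating this step while the plant is driven by the sum of the two torques gradually transfers control from the feedback loop to the network.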


international symposium on neural networks | 1999

Gauss-sigmoid neural network

Katsunari Shibata; Koji Ito

RBF (radial basis function)-based networks have been widely used because they can learn strongly nonlinear functions quickly and easily owing to their local learning characteristics. Among them, Gaussian soft-max networks have better generalization ability than regular RBF networks because of their extrapolation ability. However, since an RBF-based network has no hidden units that can represent global information, no internal representation can be obtained. Accordingly, even when knowledge obtained in previous learning could be utilized effectively in the present learning, the network has to learn from scratch. Multi-layered neural networks, by contrast, can form an internal representation in the hidden layer through learning. The paper proposes a Gauss-sigmoid neural network for learning with continuous input signals. The input signals are fed into an RBF network, and the outputs of the RBF network are fed into a sigmoid-based multi-layered neural network. After learning based on backpropagation, the localized signals from the RBF network are integrated, and a space appropriate for the given learning is reconstructed in the hidden layer of the sigmoid-based neural network. Once this hidden space is constructed, the advantages of local learning and of global generalization ability can coexist.
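The described architecture feeds the input through a layer of Gaussian RBF units and passes their outputs to a sigmoid multi-layer network trained by backpropagation. A minimal sketch of that pipeline follows; the layer sizes, RBF placement, and toy regression target are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not fix network sizes here.
n_in, n_rbf, n_hidden, n_out = 2, 16, 8, 1

# RBF layer: fixed random centers over the input space, shared width.
centers = rng.uniform(-1.0, 1.0, size=(n_rbf, n_in))
width = 0.5

def rbf_layer(x):
    # Gaussian activations localized around each center.
    d2 = np.sum((x - centers) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Sigmoid MLP on top of the RBF outputs, trained by backpropagation.
W1 = rng.normal(scale=0.5, size=(n_hidden, n_rbf))
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    phi = rbf_layer(x)
    h = sigmoid(W1 @ phi)
    return phi, h, W2 @ h

def train_step(x, target, lr=0.1):
    # One backpropagation step through the sigmoid layers; the RBF
    # layer itself is kept fixed in this sketch.
    global W1, W2
    phi, h, y = forward(x)
    err = y - target
    dh = (W2.T @ err) * h * (1.0 - h)
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer(dh, phi)

# Toy regression target: f(x) = sin(pi * x0) * x1 on random samples.
for _ in range(3000):
    x = rng.uniform(-1.0, 1.0, size=n_in)
    train_step(x, np.array([np.sin(np.pi * x[0]) * x[1]]))
```

The hidden layer of the sigmoid network is where the localized RBF signals get integrated into a global internal representation, which is the point of the architecture.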


international conference on robotics and automation | 2000

A learning and dynamic pattern generating architecture for skilful robotic baseball batting system

Xin-Zhi Zheng; Wataru Inamura; Katsunari Shibata; Koji Ito

A learning and dynamic pattern generating system is established for acquiring skills in the dynamic manipulation of objects with robotic manipulators, where the desired spatial trajectories for the manipulators are not specified explicitly. Robotic batting is taken as the task example. The problem is formulated as iterative learning of the manipulator's joint driving torque patterns, which are treated as the task skills and learned for several representative desired ball velocities. A multi-layered artificial neural network learns and generalizes the joint driving torque over various desired ball velocities, and an iterative optimal control algorithm generates the supervisory joint driving torque signals for the neural network. Computer simulations of a three-degree-of-freedom manipulator are outlined; the results illustrate the idea and verify the proposed approach, and robustness issues are discussed qualitatively.


international symposium on neural networks | 1993

Development of a high-performance general purpose neuro-computer composed of 512 digital neurons

Yuji Sato; Katsunari Shibata; Mitsuo Asai; Masaru Ohki; M. Sugie; Takahiro Sakaguchi; Masashi Hashimoto; Yoshihiro Kuwabara

A high-performance, general-purpose neuro-computer composed of 512 digital neurons is developed. Each neuron has an execution unit optimized for traditional neural functions, but the use of a micro-programming architecture makes it general enough to implement any neural function. Horizontal micro-instruction formats and massively parallel-pipelined computation allow high-speed on-chip learning. The theoretical maximum learning speed for the backpropagation algorithm is 1.25 GCUPS (giga connection updates per second). Eight digital neurons are integrated on each neuron chip using 1.0-µm CMOS technology, and 64 neuron chips are packaged in this hardware. The hardware can be connected to a host workstation over a SCSI network. We applied this neuro-computer to handwritten numeral recognition; learning on the neuro-computer is over 1000 times faster than on the workstation.


Journal of Robotics | 2010

Emergence of Prediction by Reinforcement Learning Using a Recurrent Neural Network

Kenta Goto; Katsunari Shibata

To develop a robot that behaves flexibly in the real world, it is essential that the robot learn various necessary functions autonomously, without receiving significant information from a human in advance. Among such functions, this paper focuses on learning “prediction,” which has recently attracted attention from the viewpoint of autonomous learning. The authors point out that it is important to acquire through learning not only the way of predicting future information, but also the purposive extraction of the prediction target from sensor signals. It is suggested that through reinforcement learning using a recurrent neural network, both emerge purposively and simultaneously, without testing individually whether each piece of information is predictable. In a task where an agent is rewarded for catching a moving object that can become invisible, the agent learned to detect the necessary factors of the object's velocity before it disappeared, to relay the information among some hidden neurons, and finally to catch the object at an appropriate position and time, accounting for bounces off a wall after the object became invisible.

Collaboration


Dive into Katsunari Shibata's collaborations.

Top Co-Authors

Koji Ito
Ritsumeikan University

Xin-Zhi Zheng
Tokyo Institute of Technology