Hirotoshi Eguchi
Ricoh
Publication
Featured research published by Hirotoshi Eguchi.
International Symposium on Neural Networks | 1991
Hirotoshi Eguchi; Toshiyuki Furuta; Hiroyuki Horiguchi; Sugitaka Oteki; T. Kitaguchi
A model for neural network learning and recall has been developed and implemented in digital LSI. Activation, weight, and error signals are represented by stochastic digital pulse trains, where the average pulse frequency represents the value of the signal. All mathematical operations are performed in parallel using simple logical operations on the signal pulses, and learning is performed on the chip. A network of these artificial neurons rapidly learned the solution to a two-dimensional inverted pendulum-balancer control problem; another such network solved a simple character recognition problem.
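The pulse-train arithmetic described in this abstract can be illustrated with a short simulation. The sketch below is not the chip's actual circuit; it only demonstrates the general stochastic-computing idea the abstract relies on, under the assumption that a value in [0, 1] is encoded as the density of a random pulse train and that the product of two such values can be formed by ANDing independent pulse trains. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(value, n_pulses):
    """Encode a value in [0, 1] as a stochastic pulse train:
    each pulse fires with probability equal to the value."""
    return rng.random(n_pulses) < value

def decode(pulses):
    """Recover the value as the average pulse density."""
    return pulses.mean()

# The product of two stochastically encoded values can be computed
# with a bitwise AND of their independent pulse trains.
a, b = 0.6, 0.4
n = 1024
pa, pb = encode(a, n), encode(b, n)
estimate = decode(pa & pb)

print(f"exact product = {a * b:.3f}, stochastic estimate = {estimate:.3f}")
```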
International Symposium on Neural Networks | 1993
Sugitaka Oteki; A. Hashimoto; Toshiyuki Furuta; S. Motomura; T. Watanabe; D.G. Stork; Hirotoshi Eguchi
A digital neural network VLSI chip, the RN200, has been developed and fabricated. Sixteen neurons and a total of 256 synapses are integrated in a 13.73 × 13.73 mm² VLSI chip, fabricated in RICOH 0.8 μm CMOS technology. A multiple-layer neural network can be built by combining two or more chips. Signals within the network (e.g., activations, error signals, connection weights) are represented by stochastic digital pulse trains, and both the feedforward and learning processes are efficiently implemented with simple logic gates. A novel approach for approximating the derivative of the activation function is described; the approximation circuit requires only a few gates. A multiple-RNG architecture is adopted to ensure the random distribution of pulses: both the seeds and the configurations of the on-chip random number generators can be updated dynamically and randomly by this mechanism. The effectiveness of the derivative approximation and the multiple-RNG architecture is verified in simulation through learning performance on a handwritten character recognition problem. The chip performs 5.12 gigapulse operations per second, corresponding to an effective neural computing rate of 40 MCPS or 40 MCUPS.
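As a rough reading of the throughput figures quoted above: CPS and CUPS are conventionally connections per second and connection updates per second in neural hardware literature, and the ratio of the two quoted numbers implies the length of the pulse window used per connection. The window length in the sketch below is an inference from those numbers, not a value stated in the abstract.

```python
# Back-of-the-envelope reading of the RN200 figures quoted above.
# The pulse window length is inferred from the ratio of the two quoted
# numbers; it is not stated explicitly in the abstract.
pulse_ops_per_second = 5.12e9   # "5.12 gigapulse operations per second"
effective_rate = 40e6           # "40 MCPS / 40 MCUPS"

implied_window = pulse_ops_per_second / effective_rate
print(f"implied pulses per connection operation: {implied_window:.0f}")  # 128
```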
International Symposium on Neural Networks | 1992
Hirotoshi Eguchi; D.G. Stork; G. Wolff
The authors present mathematical results related to recent neural network algorithms employing stochastic pulse encoding. In such algorithms, neural activations and connection weights are encoded as stochastic streams of pulses, where the average pulse density represents the signal or weight value. The authors show the precise form of the expected output for two- and three-input neurons and describe these functions in the limit of a large number of inputs. They address a fundamental limitation inherent in these stochastic techniques: their finite precision. The precision depends on the pulse averaging period: the longer this period (i.e., the larger the number of pulses sampled), the higher the precision. The authors derive exact expressions for the distribution of neural periods as well as a statistical analysis to find the averaging period required for a precision of five bits, a resolution determined by others to be necessary for successful implementations of backpropagation. Approximately 1000 pulses are found to be required for 5-bit precision. These results reveal fundamental limits on the speed and memory requirements of stochastic pulse implementations of neural learning algorithms.
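The roughly 1000-pulse figure can be sanity-checked with an elementary statistical model. The sketch below is not the paper's derivation: it assumes each pulse is an independent Bernoulli trial with success probability equal to the encoded value, so the standard error of the decoded value after N pulses is sqrt(p(1-p)/N); requiring the worst-case (p = 0.5) error to stay within half of a 5-bit quantization step gives N on the order of 1000, consistent with the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption: pulses are independent Bernoulli trials (a plausibility check
# of the ~1000-pulse figure, not the paper's exact analysis).
bits = 5
half_lsb = 0.5 / 2**bits                      # half of a 5-bit step = 1/64

# Analytic estimate: worst-case std dev is 0.5 / sqrt(N); require <= half_lsb.
n_required = int(np.ceil((0.5 / half_lsb) ** 2))
print(f"analytic estimate: N >= {n_required} pulses")      # -> 1024

# Monte Carlo check: decode p = 0.5 from n_required-pulse trains.
trains = rng.random((10_000, n_required)) < 0.5
decoded = trains.mean(axis=1)
print(f"std of decoded value: {decoded.std():.4f} (half LSB = {half_lsb:.4f})")
```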
Archive | 1992
Toshiyuki Furuta; Hiroyuki Horiguchi; Hirotoshi Eguchi; Yutaka Ebi; Tatsuya Furukawa; Yoshio Watanabe; Toshihiro Tsukagoshi
Archive | 1992
Toshiyuki Furuta; Hiroyuki Horiguchi; Hirotoshi Eguchi
Archive | 1993
Toshiyuki Furuta; Takashi Kitaguchi; Hirotoshi Eguchi
Archive | 2004
Kenichi Ogata; Takahiro Yoshida; Hirotoshi Eguchi
Archive | 1991
Shuji Motomura; Toshiyuki Furuta; Hirotoshi Eguchi
Archive | 1992
Toshiyuki Furuta; Takashi Kitaguchi; Hirotoshi Eguchi
Archive | 2002
Hirotoshi Eguchi; Shuzo Matsumoto; Kenichi Ogata; Kiyoshi Yamaguchi; Takayuki Hiyoshi; Mitsuru Shingyohuchi; Masanori Kusunoki