
Publication


Featured research published by Yutaka Arima.


International Solid-State Circuits Conference | 1991

A 336-neuron, 28 K-synapse, self-learning neural network chip with branch-neuron-unit architecture

Yutaka Arima; Koichiro Mashiko; Keisuke Okada; Tsuyoshi Yamada; Atsushi Maeda; Hiromi Notani; Harufusa Kondoh; S. Kayano

A self-learning neural network chip based on the branch-neuron-unit (BNU) architecture, which expands the scale of a neural network by interconnecting multiple chips without reducing performance, is described. The chip integrates 336 neurons and 28,224 synapses in a 1.0-μm double-poly-Si, double-metal CMOS technology. The operating speed exceeds 1×10¹² connections per second per chip. It is estimated that the network can be expanded to several hundred chips; with 200 interconnected chips, the network would comprise 3,360 neurons and 5,644,800 synapses.
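
As a rough consistency check on the scaling figures quoted above (the per-unit breakdown below is an inference, not stated in the abstract), 200 chips of 28,224 synapses each give 5,644,800 synapses, and 3,360 neurons then corresponds to 1,680 inputs per logical neuron, i.e. each neuron's branch-neuron units spread across 20 chips:

    # Rough consistency check of the 200-chip BNU scaling (illustrative assumptions, not from the paper)
    NEURONS_PER_CHIP = 336
    SYNAPSES_PER_CHIP = 28_224
    CHIPS = 200

    synapses_per_bnu = SYNAPSES_PER_CHIP // NEURONS_PER_CHIP    # 84 synapses per branch-neuron unit
    total_synapses = CHIPS * SYNAPSES_PER_CHIP                  # 5,644,800, as quoted in the abstract
    total_neurons = 3_360                                       # figure quoted in the abstract
    synapses_per_neuron = total_synapses // total_neurons       # 1,680 inputs per logical neuron
    chips_per_neuron = synapses_per_neuron // synapses_per_bnu  # 20 chips hold branches of one neuron

    assert total_synapses == 5_644_800
    assert chips_per_neuron * (total_neurons // NEURONS_PER_CHIP) == CHIPS  # 20 chips/neuron x 10 neuron groups
    print(total_synapses, synapses_per_neuron, chips_per_neuron)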


IEEE Journal of Solid-State Circuits | 1991

A self-learning neural network chip with 125 neurons and 10 K self-organization synapses

Yutaka Arima; Koichiro Mashiko; Keisuke Okada; Tsuyoshi Yamada; Atsushi Maeda; Harufusa Kondoh; Shimpei Kayano

The authors propose a neural network chip with 125 neurons that can self-organize the connection weight of each synapse, enabling on-chip learning. The chip employs a mixed analog/digital design in a 1.0-μm CMOS technology and operates more than 1000 times faster than conventional computers.


International Solid-State Circuits Conference | 1994

A 1.2GFLOPS neural network chip exhibiting fast convergence

Yoshikazu Kondo; Yuichi Koshiba; Yutaka Arima; Mitsuhiro Murasaki; Tsuyoshi Yamada; Hiroyuki Amishiro; Hirofumi Shinohara; Hakuro Mori

This digital neural network chip, intended as the core of neural network accelerators, employs a single-instruction multiple-data-stream (SIMD) architecture and includes twelve 24-bit floating-point processing units (PUs), a nonlinear function unit (NFU), and a control unit (CU). Each PU includes a 24-bit × 1.28-kword local memory and communicates with its neighbors through a shift-register ring. This configuration permits both feed-forward and error back-propagation (BP) processes to be executed efficiently. The CU, which includes a three-stage pipelined sequencer, a 24-bit × 1-kword instruction code memory (ICM), and a 144-bit × 256-word microcode memory (MCM), broadcasts network parameters (e.g. learning coefficients or temperature parameters) or local-memory addresses over a data bus and an address bus. Two external memory ports and a ring expansion port permit large networks to be constructed; the external memory can be expanded by up to 768 kwords using the two ports.
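
To illustrate how a shift-register ring lets a feed-forward pass be split across SIMD processing units, the sketch below (a hypothetical software model; the function ring_matvec, the row/column blocking, and the step scheduling are assumptions, not the chip's actual microcode) stores a block of weight rows in each PU and rotates slices of the input vector around the ring so every PU eventually sees every input:

    import numpy as np

    def ring_matvec(W, x, n_pu=12):
        """Toy model of a feed-forward pass on a ring of SIMD processing units.
        Each PU owns a block of output rows of W; slices of the input vector x
        circulate around the shift-register ring until every PU has seen them.
        Illustrative only: block sizes and scheduling are assumptions."""
        n_out, n_in = W.shape
        assert n_out % n_pu == 0 and n_in % n_pu == 0
        row_blocks = np.split(W, n_pu, axis=0)      # weight rows held in each PU's local memory
        x_blocks = np.split(x, n_pu)                # input slices, one per PU at the start
        col = n_in // n_pu
        y_blocks = [np.zeros(n_out // n_pu) for _ in range(n_pu)]
        for step in range(n_pu):
            for pu in range(n_pu):
                # the slice this PU holds at this step originated at PU (pu - step) mod n_pu
                j = (pu - step) % n_pu
                y_blocks[pu] += row_blocks[pu][:, j * col:(j + 1) * col] @ x_blocks[j]
            # after the compute phase, each PU shifts its slice to its neighbor on the ring
        return np.concatenate(y_blocks)

    W = np.random.randn(24, 36)
    x = np.random.randn(36)
    assert np.allclose(ring_matvec(W, x), W @ x)    # matches an ordinary matrix-vector product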


IEEE Journal of Solid-State Circuits | 1996

A 1.2 GFLOPS neural network chip for high-speed neural network servers

Yoshikazu Kondo; Yuichi Koshiba; Yutaka Arima; Mitsuhiro Murasaki; Tsuyoshi Yamada; Hiroyuki Amishiro; Hakuro Mori; Kazuo Kyuma

This paper describes a digital neural network chip for high-speed neural network servers. The chip employs a single-instruction multiple-data-stream (SIMD) architecture consisting of 12 floating-point processing units, a control unit, and a nonlinear function unit. At a 50 MHz clock frequency, the chip achieves a peak performance of 1.2 GFLOPS using a 24-bit floating-point representation. Two schemes for expanding the network size enable neural tasks requiring over one million synapses to be executed. Average performance on typical neural network models is also discussed.
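
The quoted peak is consistent with a simple count, assuming each processing unit completes one multiply and one add (two floating-point operations) per clock cycle; that per-cycle throughput is an assumption for illustration, not stated in the abstract:

    pus = 12                      # floating-point processing units on the chip
    clock_hz = 50e6               # 50 MHz clock frequency
    flops_per_pu_per_cycle = 2    # assumed: one multiply plus one add per cycle
    peak_flops = pus * clock_hz * flops_per_pu_per_cycle
    print(peak_flops / 1e9, "GFLOPS")   # 1.2 GFLOPS, matching the quoted peak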


IEEE Journal of Solid-State Circuits | 1989

Design for reducing alpha-particle-induced soft errors in ECL logic circuitry

Masatomi Okabe; M. Tatsuki; Yutaka Arima; T. Hirao; Y. Kuramitsu

Alpha-particle-induced soft errors in ECL logic circuitry are described. Soft-error hot testing was performed on 2.5-, 1.7-, and 1.3-μm ECL gate arrays exposed to ²⁴¹Am. The experimental results reveal that: (1) soft errors in ECL logic circuitry are caused by two types of upsets, latch toggling and gate upsets in clock lines; (2) the soft-error rate (SER) increases rapidly as feature size is scaled down; and (3) the SER increases rapidly as switching current is reduced. To overcome this degradation of soft-error tolerance, a soft-error-hardened (SEH) circuit design technique is introduced, employing a new driver configuration that prevents upsets of critical nodes from propagating to following gates. The SEH design has been implemented in the ECL gate arrays and verified to improve the SER by more than a factor of 10² compared with conventional circuits.


International Symposium on Neural Networks | 1993

A 20 Tera-CPS analog neural network board

Mitsuhiro Murasaki; Yutaka Arima; Hirofumi Shinohara

An analog neural network board integrating 18 interconnected neural network chips is demonstrated. The board realizes a fully connected feedback network consisting of 1,008 neurons and 1,016,064 synapses. The network's speed performance is 20×10¹² connections per second (CPS) and 500×10⁹ connection updates per second (CUPS).
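
The board's figures line up with a fully connected feedback network, where the synapse count is the square of the neuron count. The sketch below checks this and converts the CPS and CUPS ratings into whole-network rates; the per-chip partitioning of 56 neurons is an inference from 1,008 / 18 and is not stated in the abstract:

    chips = 18
    neurons = 1_008
    synapses = neurons ** 2                 # fully connected feedback network: 1,016,064
    neurons_per_chip = neurons // chips     # 56 (inferred, not stated in the abstract)
    cps = 20e12                             # connections per second
    cups = 500e9                            # connection updates per second

    assert synapses == 1_016_064
    evals_per_s = cps / synapses            # ~19.7 million full-network evaluations per second
    updates_per_s = cups / synapses         # ~492,000 full weight-update sweeps per second
    print(neurons_per_chip, evals_per_s, updates_per_s)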


Archive | 1992

Neural network integrated circuit device having self-organizing function

Yutaka Arima; Ichiro Tomioka; Toshiaki Hanibuchi


Archive | 1994

Numerical arithmetic processing unit

Yoshikazu Kondo; Yutaka Arima


Archive | 1992

Neural network expressing apparatus including refresh of stored synapse load value information

Yutaka Arima


Archive | 2000

Semiconductor image pickup device

Yasuyuki Endo; Yutaka Arima; Hiroki Ui

