Hiroyuki Amishiro
Mitsubishi
Publications
Featured research published by Hiroyuki Amishiro.
International Solid-State Circuits Conference | 1994
Yoshikazu Kondo; Yuichi Koshiba; Yutaka Arima; Mitsuhiro Murasaki; Tuyoshi Yamada; Hiroyuki Amishiro; Hirofumi Shinohara; Hakuro Mori
This digital neural network chip for use as a core in neural network accelerators employs a single-instruction multiple-data (SIMD) architecture and includes twelve 24 b floating-point processing units (PUs), a nonlinear function unit (NFU), and a control unit (CU). Each PU includes 24 b × 1.28 kW local memory and communicates with its neighbors through a shift-register ring. This configuration permits both feed-forward and error back-propagation (BP) processes to be executed efficiently. The CU, which includes a three-stage pipelined sequencer, a 24 b × 1 kW instruction code memory (ICM) and a 144 b × 256 W microcode memory (MCM), broadcasts network parameters (e.g. learning coefficients or temperature parameters) or addresses for local memories through a data bus and an address bus. Two external memory ports and a ring expansion port permit large networks to be constructed. The external memory can be expanded by up to 768 kW using the two ports.
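The shift-register ring described above lets each PU see every activation over successive neighbor-to-neighbor shifts, so a feed-forward matrix-vector product needs no shared bus. The sketch below illustrates that idea only; the names, data layout, and the 12-unit loop structure are illustrative assumptions, not the chip's actual microarchitecture.

```python
# Sketch of a SIMD shift-register ring: 12 PUs, each holding one
# weight row and one activation. A full rotation of the ring lets
# every PU accumulate the complete dot product for its output neuron.

NUM_PUS = 12

def ring_matvec(weights, activations):
    """weights: NUM_PUS x NUM_PUS nested list (row i resident in PU i);
    activations: length-NUM_PUS list, one value per PU."""
    acc = [0.0] * NUM_PUS
    ring = list(activations)       # value currently sitting in each PU's ring slot
    src = list(range(NUM_PUS))     # which activation index each PU currently holds
    for _ in range(NUM_PUS):
        # every PU does one multiply-accumulate with its resident value
        for pu in range(NUM_PUS):
            acc[pu] += weights[pu][src[pu]] * ring[pu]
        # rotate the ring one position (neighbor-to-neighbor transfer)
        ring = ring[-1:] + ring[:-1]
        src = src[-1:] + src[:-1]
    return acc
```

After `NUM_PUS` shifts the result equals an ordinary matrix-vector product, which is why the same ring also serves the back-propagation pass with transposed access patterns.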
IEEE Journal of Solid-State Circuits | 1996
Yoshikazu Kondo; Yuichi Koshiba; Yutaka Arima; Mitsuhiro Murasaki; Tuyoshi Yamada; Hiroyuki Amishiro; Hakuro Mori; Kazuo Kyuma
This paper describes a digital neural network chip for high-speed neural network servers. The chip employs a single-instruction multiple-data (SIMD) architecture consisting of 12 floating-point processing units, a control unit, and a nonlinear function unit. At a 50 MHz clock frequency, the chip achieves a peak performance of 1.2 GFLOPS using 24-bit floating-point representation. Two schemes for expanding the network size enable neural tasks requiring over 1 million synapses to be executed. The average performance on typical neural network models is also discussed.
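The quoted 1.2 GFLOPS peak is consistent with each of the 12 PUs retiring one multiply-accumulate (counted as two floating-point operations) per 50 MHz clock cycle. The abstract does not break the figure down this way, so the 2-FLOPs-per-cycle assumption below is an inference, not a statement from the paper:

```python
# Back-of-the-envelope check of the quoted peak rate.
# Assumption: one multiply-accumulate (2 FLOPs) per PU per cycle.
num_pus = 12
clock_hz = 50e6
flops_per_pu_per_cycle = 2      # multiply + add
peak_gflops = num_pus * clock_hz * flops_per_pu_per_cycle / 1e9
print(peak_gflops)              # → 1.2
```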
Archive | 1999
Hiroyuki Amishiro; Motoshige Igarashi
Archive | 1997
Motoshige Igarashi; Hiroyuki Amishiro; Keiichi Higashitani
Archive | 1991
Hirofumi Shinohara; Hiroyuki Amishiro
Archive | 2003
Hiroyuki Amishiro; Motoshige Igarashi
Archive | 2003
Hiroyuki Amishiro; Kenji Yamaguchi
Archive | 2002
Kenji Yamaguchi; Hiroyuki Amishiro; Motoshige Igarashi
Archive | 2000
Motoshige Igarashi; Hiroyuki Amishiro; Keiichi Higashitani
Archive | 2002
Hideyo Haruhana; Hiroyuki Amishiro; Akihiko Harada