Tomoya Kameda
Nara Institute of Science and Technology
Publications
Featured research published by Tomoya Kameda.
Neurocomputing | 2017
Mutsumi Kimura; Ryohei Morita; Sumio Sugisaki; Tokiyoshi Matsuda; Tomoya Kameda; Yasuhiko Nakashima
We have developed a cellular neural network built from simplified processing elements composed of thin-film transistors. First, we simplified the neuron circuit into a two-inverter two-switch circuit and reduced the synapse device to a single transistor. Next, we implemented the processing elements with thin-film transistors, which are promising for giant microelectronics, and formed a cellular neural network from these elements. Finally, we confirmed that the cellular neural network can learn multiple logic functions even at a small network scale. Moreover, we verified that it can simultaneously recognize multiple simple alphabet letters. These results should serve as a theoretical basis for realizing ultra-large-scale integration for brain-type integrated circuits.
International Conference on Neural Information Processing | 2016
Mutsumi Kimura; Nao Nakamura; Tomoharu Yokoyama; Tokiyoshi Matsuda; Tomoya Kameda; Yasuhiko Nakashima
Simplification of the processing elements is greatly desired in cellular neural networks to realize ultra-large-scale integration. First, we propose reducing a neuron to a two-inverter two-switch circuit, a two-inverter one-switch circuit, or a two-inverter circuit. Next, we propose reducing a synapse to only a variable resistor or a variable capacitor. Finally, we confirm correct operation of the cellular neural networks through circuit simulation. These results provide one of the theoretical bases for applying cellular neural networks to brain-type integrated circuits.
International Conference on Neural Information Processing | 2016
Tomoya Kameda; Mutsumi Kimura; Yasuhiko Nakashima
Recently, neural networks have been developed for various purposes, including image and voice recognition. However, software-only implementations require a huge amount of computation and energy. We are therefore designing hardware based on a cellular neural network (CNN) that features low power, high density, and high functionality. In this study, we developed a CNN simulator for evaluating a letter-reproduction algorithm. In this simulator, each neuron is connected only to its neighboring neurons through the surrounding synapses, and learning is performed by modifying the strength of each connection. In particular, we assume crosspoint-type synapses made of a-IGZO films, which exploit the phenomenon that the conductance changes when an electric current flows. We modeled this phenomenon and implemented it in the simulator to determine the network architecture and device parameters. In this paper, the network structure, the allocation method of the a-IGZO devices, and the algorithm are described. Finally, we confirmed that our cellular neural network can learn two letters. Furthermore, the estimated learning time is around 100 h based on the current characteristic-change model of the a-IGZO film, so conditions that accelerate the deterioration of the a-IGZO film should be explored.
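The neighbor-only connectivity and current-driven conductance change described in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' simulator: the grid size, the drift constant, and the function names are assumptions made for the example.

```python
# Minimal cellular-neural-network sketch: neurons on a grid, each connected
# only to its neighbours; synapse weights are modelled as a-IGZO conductances
# that drift when current flows. Grid size, drift rate, and names are
# illustrative assumptions, not the authors' actual model or parameters.
import numpy as np

GRID = 5        # assumed 5x5 neuron grid
DRIFT = 1e-3    # assumed conductance change per learning step

def neighbours(r, c):
    """Coordinates of the (up to 8) surrounding neurons inside the grid."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < GRID and 0 <= c + dc < GRID:
                yield (r + dr, c + dc)

# One conductance (weight) per neighbour-to-neuron connection.
g = {(cell, n): 1.0
     for cell in [(r, c) for r in range(GRID) for c in range(GRID)]
     for n in neighbours(*cell)}

def step(state):
    """Each neuron takes the sign of the weighted sum of its neighbours."""
    new = np.ones((GRID, GRID))
    for r in range(GRID):
        for c in range(GRID):
            s = sum(g[((r, c), n)] * state[n] for n in neighbours(r, c))
            if s < 0:
                new[r, c] = -1.0
    return new

def learn(target, state):
    """Drift each conductance so the network moves toward the target letter."""
    for (cell, n) in g:
        g[(cell, n)] += DRIFT * target[cell] * state[n]
```

Iterating learn and step with a +/-1 letter pattern as the target mimics the reproduction experiment only in spirit; the actual simulator models the measured current-stress characteristic of the a-IGZO film rather than a constant drift.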
IEEE Journal of the Electron Devices Society | 2018
Mutsumi Kimura; Yuki Koga; Hiroki Nakanishi; Tokiyoshi Matsuda; Tomoya Kameda; Yasuhiko Nakashima
We have succeeded in using In–Ga–Zn–O (IGZO) thin-film devices as synapse elements in a neural network. The electrical conductance is regarded as the connection strength, and its continuous change under current flow is employed as the connection plasticity, with modified Hebbian learning as the learning rule. We developed a cellular neural network using the IGZO thin-film devices and confirmed that it can learn simple logic functions. These results suggest the possibility of realizing a 3-D layered structure for brain-type integrated systems.
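A hedged behavioural sketch of the idea in this abstract: the weight is a conductance that changes only while current flows through the device. The class name, decay constant, and voltage convention are assumptions for illustration, not the authors' device model.

```python
# Behavioural sketch of an IGZO thin-film synapse as described in the abstract:
# conductance acts as the connection weight, and it changes continuously while
# current flows (modelled here as a simple multiplicative degradation).
# The decay rate and method names are assumptions, not measured parameters.
class IGZOSynapse:
    def __init__(self, g0=1.0, decay=0.01):
        self.g = g0          # conductance, read out as the connection weight
        self.decay = decay   # assumed fractional conductance change per write pulse

    def read(self, v):
        """Ohmic read: output current for a small applied voltage v."""
        return self.g * v

    def write(self, pulse_on):
        """Apply (or skip) a stress pulse; conductance changes only when current flows."""
        if pulse_on:
            self.g *= (1.0 - self.decay)
        return self.g
```

In this reading of the learning rule, stress pulses are routed only to the connections that should weaken, so the network can learn even though each weight moves in one direction; the related abstracts describe the change as a degradation, which is why the sketch only lets the conductance decrease.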
International Conference on Neural Information Processing | 2017
Tomoya Kameda; Mutsumi Kimura; Yasuhiko Nakashima
Neuromorphic hardware that uses simplified elements and thin-film semiconductor devices as synapse elements is proposed. Amorphous metal-oxide semiconductor devices are assumed for the synapse elements, and their characteristic degradation is utilized by a learning rule named modified Hebbian learning. First, we explain the architecture and operation of a Hopfield neural network, model the electrical characteristics of the thin-film semiconductor devices, and simulate letter recognition by the network; in particular, we show a degradation map. We then explain the architecture and operation of a cellular neural network, model the thin-film semiconductor devices, and simulate letter recognition; in particular, we evaluate connection schemes and find that the cellular neural network performs better when it has diagonal connections. Finally, we compare the Hopfield and cellular neural networks and find that the Hopfield neural network has higher performance, although the cellular neural network has a simpler structure.
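For readers unfamiliar with the comparison, the Hopfield side can be grounded with a textbook recall loop. This sketch uses the standard Hebbian outer-product weights, not the degradation-based rule or thin-film synapses of the paper, and the pattern encoding is an assumption for the example.

```python
# Textbook Hopfield network for letter recognition, included only to ground the
# comparison above. The paper's network instead uses thin-film synapse devices
# and a degradation-based ("modified Hebbian") learning rule.
import numpy as np

def train(patterns):
    """Hebbian outer-product weights for a stack of +/-1 letter vectors."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)
    return w / len(patterns)

def recall(w, state, steps=20):
    """Synchronous sign updates until the state stops changing."""
    for _ in range(steps):
        new = np.where(w @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state
```

The all-to-all weight matrix is what gives the Hopfield network its recall capacity; the cellular network trades that capacity for local wiring that is easier to lay out in hardware, which matches the trade-off reported in the abstract.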
Journal of Electrical & Electronic Systems | 2017
Mutsumi Kimura; Hiroki Nakanishi; Nao Nakamura; Tomoharu Yokoyama; Tokiyoshi Matsuda; Tomoya Kameda; Yasuhiko Nakashima
We have succeeded in simplifying the processing elements of a cellular neural network. First, we reduce a neuron to a two-inverter two-switch circuit, a two-inverter one-switch circuit, or a two-inverter circuit. Next, we reduce a synapse to only a variable resistor or a variable capacitor. Finally, we confirm correct operation of the cellular neural network by having it learn arbitrary logic functions. These results will be a theoretical basis for realizing ultra-large-scale integration for brain-type integrated circuits.
ECS Transactions | 2017
Mutsumi Kimura; Tokiyoshi Matsuda; Tomoya Kameda; Yasuhiko Nakashima
International Workshop on Active-Matrix Flatpanel Displays and Devices | 2017
Keisuke Ikushima; Kenta Umeda; Tokiyoshi Matsuda; Mutsumi Kimura; Tomoya Kameda; Yasuhiko Nakashima
The Japan Society of Applied Physics | 2017
Keisuke Ikushima; Yuki Koga; Toshimasa Hori; Tokiyoshi Matsuda; Mutsumi Kimura; Tomoya Kameda; Yasuhiko Nakashima