Hirotsugu Okuno
Osaka University
Publications
Featured research published by Hirotsugu Okuno.
IEEE Transactions on Biomedical Circuits and Systems | 2012
Hirotsugu Okuno; Tetsuya Yagi
We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
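The three coding strategies in this abstract (logarithmic transform, local average subtraction, and feedback gain control) can be illustrated with a minimal Python sketch. This is a software stand-in, not the published hardware design: in the actual system the logarithmic characteristic comes from the APS reset-voltage control, the local average from the resistive network, and the gain loop from the FPGA. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def bio_inspired_encode(intensity, gain=1.0, kernel_size=7, target_std=0.2):
    """Software sketch of the three coding strategies described above.

    intensity : 2-D array of linear light intensities (arbitrary units > 0).
    Returns the encoded image and an updated feedback gain.
    """
    # 1) Logarithmic transform (compresses a wide dynamic range).
    log_img = np.log1p(intensity)

    # 2) Local average subtraction: the resistive network computes a smoothed
    #    version of the image; a simple box filter stands in for it here.
    pad = kernel_size // 2
    padded = np.pad(log_img, pad, mode='edge')
    local_avg = np.zeros_like(log_img)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            local_avg += padded[dy:dy + log_img.shape[0], dx:dx + log_img.shape[1]]
    local_avg /= kernel_size ** 2
    contrast = log_img - local_avg

    # 3) Feedback gain control: scale the contrast signal toward a target
    #    output spread, updating the gain gradually (feedback loop).
    output = gain * contrast
    measured_std = output.std() + 1e-6
    new_gain = gain * (1.0 + 0.1 * (target_std / measured_std - 1.0))
    return output, new_gain

# Example: a scene with a 1000:1 illumination difference between its two halves.
rng = np.random.default_rng(0)
scene = rng.uniform(0.5, 1.0, (128, 128))
scene[:, 64:] *= 1000.0
encoded, gain = bio_inspired_encode(scene)
print(encoded.min(), encoded.max(), gain)
```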
International Conference on Intelligent Robots and Systems | 2007
Hirotsugu Okuno; Tetsuya Yagi
A real-time vision sensor for collision avoidance was designed. To respond selectively to approaching objects on a direct collision course, the sensor employs an algorithm inspired by the visual nervous system of the locust, which can avoid collisions robustly by using visual information. We implemented the architecture of the locust nervous system with a compact hardware system containing mixed analog-digital integrated circuits that consist of an analog resistive network and field-programmable gate array (FPGA) circuits. The response properties of the system were examined by using simulated movie images, and the system was also tested in real-world situations by loading it on a motorized car. The system was confirmed to respond selectively to colliding objects even in complicated real-world situations.
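A minimal Python sketch of a locust-inspired (LGMD-like) looming detector may help illustrate the kind of computation described here: per-pixel excitation from temporal change, delayed and laterally spread inhibition, and a summed response that grows for objects on a collision course. The structure and all parameter values are illustrative assumptions, not the circuit implemented on the resistive network and FPGA.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple spatial smoothing standing in for laterally spreading inhibition."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def lgmd_like_response(frames, w_inh=0.8, threshold=5.0):
    """Sketch of a locust-inspired looming detector.

    frames : sequence of grayscale frames (2-D arrays).
    Excitation is the absolute temporal change per pixel; inhibition is a
    delayed, spatially blurred copy of that excitation. The summed rectified
    difference grows rapidly for objects expanding on a collision course.
    """
    prev_exc = None
    responses, collision_flags = [], []
    for t in range(1, len(frames)):
        excitation = np.abs(frames[t].astype(float) - frames[t - 1].astype(float))
        inhibition = w_inh * box_blur(prev_exc) if prev_exc is not None else 0.0
        s = np.maximum(excitation - inhibition, 0.0).sum()
        responses.append(s)
        collision_flags.append(s > threshold * frames[t].size)
        prev_exc = excitation
    return responses, collision_flags

# Example: a synthetic looming stimulus, a dark square that expands over time.
frames = []
for r in range(2, 30, 2):
    f = np.full((64, 64), 255.0)
    f[32 - r:32 + r, 32 - r:32 + r] = 0.0
    frames.append(f)
resp, flags = lgmd_like_response(frames)
print([round(x) for x in resp])
```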
IEEE Transactions on Biomedical Circuits and Systems | 2015
Hirotsugu Okuno; Jun Hasegawa; Tadashi Sanada; Tetsuya Yagi
In most parts of the retina, neuronal circuits process visual signals represented by slowly changing membrane potentials, or so-called graded potentials. A feasible approach to inferring the functional roles of retinal neuronal circuits is to reproduce the graded potentials of retinal neurons in response to natural scenes. In this study, we developed a simulation platform for reproducing graded potentials with the following features: real-time reproduction of retinal neural activities in response to natural scenes, a configurable model structure, and compact hardware. The spatio-temporal properties of neurons were emulated efficiently by a mixed analog-digital architecture consisting of analog resistive networks and a field-programmable gate array. The neural activities of the sustained and transient pathways were emulated from 128 × 128 inputs at 200 frames per second.
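The sustained/transient pathway emulation can be sketched in software roughly as follows: a resistive-network-like spatial smoothing yields a center-surround graded signal (sustained pathway), and a temporal high-pass of that signal approximates the transient pathway. The class name, time constants, and diffusion approximation are illustrative assumptions; the real platform computes the spatial filtering with analog resistive networks and the dynamics on the FPGA.

```python
import numpy as np

class RetinaEmulatorSketch:
    """Minimal software sketch of sustained/transient graded-potential pathways."""

    def __init__(self, shape=(128, 128), dt=1.0 / 200, tau=0.05, smooth_iters=10, lam=0.2):
        self.dt = dt                      # 200 frames per second
        self.tau = tau                    # temporal low-pass time constant (s)
        self.smooth_iters = smooth_iters  # diffusion steps of the "resistive network"
        self.lam = lam                    # diffusion strength per step
        self.low_pass = np.zeros(shape)   # temporal state for the transient pathway

    def _resistive_smooth(self, img):
        # Iterative 4-neighbor diffusion approximating a resistive network.
        s = img.astype(float).copy()
        for _ in range(self.smooth_iters):
            nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                  np.roll(s, 1, 1) + np.roll(s, -1, 1)) / 4.0
            s += self.lam * (nb - s)
        return s

    def step(self, frame):
        # Sustained pathway: center-surround-like signal from two smoothing scales.
        center = self._resistive_smooth(frame)
        surround = self._resistive_smooth(center)
        sustained = center - surround

        # Transient pathway: temporal high-pass of the sustained signal.
        alpha = self.dt / (self.tau + self.dt)
        self.low_pass += alpha * (sustained - self.low_pass)
        transient = sustained - self.low_pass
        return sustained, transient

# Example: feed a drifting bright bar through the emulator sketch.
emu = RetinaEmulatorSketch()
for t in range(5):
    frame = np.zeros((128, 128))
    frame[:, 10 + 5 * t: 20 + 5 * t] = 1.0
    sustained, transient = emu.step(frame)
print(sustained.shape, float(np.abs(transient).max()))
```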
Biomedical Circuits and Systems Conference | 2014
Takumi Kawasetsu; Ryoya Ishida; Tadashi Sanada; Hirotsugu Okuno
To reveal the functional roles of retinal and cortical neurons, the responses of visual neuronal networks under natural visual environments should be investigated. In this study, we developed a real-time visual system emulator for reproducing neural activities in the retina and the visual cortex by combining a hardware retina emulator developed in our previous study with SpiNNaker chips. An interface board was designed and fabricated to provide the retinal spikes simulated by the retina emulator to the SpiNNaker chips. Taking advantage of multiple parallel processing techniques, the emulator generates simulated spikes with 1 ms precision. The neural responses simulated by the emulator were examined by presenting natural scenes.
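As a rough software illustration of the retina-to-SpiNNaker interface, the sketch below converts an activity map into spike events on a 1 ms grid. The Poisson conversion, function name, and parameters are assumptions for illustration only; the actual interface board streams spikes produced by the hardware retina emulator.

```python
import numpy as np

def rate_to_spikes(rate_map, duration_ms=100, max_rate_hz=200.0, seed=0):
    """Sketch: convert a retina-emulator activity map into spike events on a
    1 ms grid, mimicking the 1 ms precision of the hardware interface.

    rate_map : 2-D array of non-negative activities (arbitrary units).
    Returns a list of (t_ms, y, x) spike events.
    """
    rng = np.random.default_rng(seed)
    rates = max_rate_hz * rate_map / (rate_map.max() + 1e-9)   # Hz per pixel
    p_per_ms = rates * 1e-3                                    # spike probability per 1 ms bin
    events = []
    for t in range(duration_ms):
        fired = rng.random(rate_map.shape) < p_per_ms
        ys, xs = np.nonzero(fired)
        events.extend((t, int(y), int(x)) for y, x in zip(ys, xs))
    return events

# Example: a bright blob in the center produces most of the spike traffic.
yy, xx = np.mgrid[0:64, 0:64]
activity = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 8.0 ** 2))
spikes = rate_to_spikes(activity)
print(len(spikes), spikes[:3])
```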
International Conference on Neural Information Processing | 2010
Tamas Fehervari; Masaru Matsuoka; Hirotsugu Okuno; Tetsuya Yagi
Applying electrical stimulation to the visual cortex has been shown to produce dot-like visual perceptions called phosphenes. Artificial prosthetic vision is based on the concept that patterns of phosphenes can be used to convey visual information to blind patients. We designed a system that performs real-time simulation of the phosphene perceptions evoked by cortical electrical stimulation. Phosphenes are displayed as Gaussian circular and ellipsoid spots on a randomised grid based on existing neurophysiological models of cortical retinotopy and the magnification factor. The system consists of a silicon retina camera (an analogue integrated vision sensor), a desktop computer, and a head-mounted display.
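A minimal Python sketch of such a phosphene simulation: image samples taken on a jittered (randomised) grid are rendered as Gaussian spots whose size grows with eccentricity, loosely imitating the cortical magnification factor. The grid layout, jitter, and eccentricity gain below are illustrative values, not the neurophysiological model parameters used in the paper.

```python
import numpy as np

def render_phosphenes(image, grid=16, jitter=0.3, base_sigma=1.5, ecc_gain=0.08,
                      out_size=256, seed=1):
    """Sketch of a phosphene-vision simulation: sample the input on a randomised
    grid and draw each sample as a Gaussian spot whose size grows with eccentricity."""
    rng = np.random.default_rng(seed)
    img = image.astype(float)
    h, w = img.shape
    out = np.zeros((out_size, out_size))
    yy, xx = np.mgrid[0:out_size, 0:out_size]
    cx = cy = out_size / 2.0

    for gy in range(grid):
        for gx in range(grid):
            # Randomised phosphene position (normalised coordinates with jitter).
            u = (gx + 0.5 + rng.uniform(-jitter, jitter)) / grid
            v = (gy + 0.5 + rng.uniform(-jitter, jitter)) / grid
            # Sample brightness from the corresponding image location.
            brightness = img[int(v * (h - 1)), int(u * (w - 1))] / 255.0
            # Spot centre and eccentricity-dependent size on the output display.
            px, py = u * out_size, v * out_size
            ecc = np.hypot(px - cx, py - cy)
            sigma = base_sigma + ecc_gain * ecc
            out += brightness * np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * sigma ** 2))
    return np.clip(out / out.max(), 0.0, 1.0)

# Example: render phosphenes for a simple high-contrast test pattern.
test = np.zeros((128, 128))
test[:, 64:] = 255.0
phosphene_view = render_phosphenes(test)
print(phosphene_view.shape, float(phosphene_view.mean()))
```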
Robotics and Autonomous Systems | 2009
Hirotsugu Okuno; Tetsuya Yagi
Bio-inspired vision systems are particularly good candidates for the navigation of mobile robots and vehicles because of their computational advantages, e.g., low power dissipation and compact hardware. Previously, we designed a mixed analog-digital integrated vision system for collision detection inspired by a locust neuronal circuit model. The response of the system was, however, susceptible to the luminance of approaching objects and to vibratory self-motion when the system was installed on a miniature mobile car. In the present study, we developed a new collision detection algorithm based on robust image-motion detection to overcome these problems and applied the algorithm to control a miniature mobile car.
Neural Networks | 2008
Hirotsugu Okuno; Tetsuya Yagi
We have designed a visually guided collision warning system with a neuromorphic architecture, employing an algorithm inspired by the visual nervous system of locusts. The system was implemented with mixed analog-digital integrated circuits consisting of an analog resistive network and field-programmable gate array (FPGA) circuits. The resistive network instantaneously processes the interaction between laterally spreading excitatory and inhibitory signals, which is essential for real-time computation of collision avoidance with low power consumption and compact hardware. The system responded selectively to approaching objects in simulated movie images at close range. The system was, however, confronted with serious noise problems caused by vibratory ego-motion when it was installed in a miniature mobile car. To overcome this problem, we developed an algorithm, also implementable in the FPGA circuits, that allows the system to respond robustly during ego-motion.
International Conference on Neural Information Processing | 2011
Bumhwi Kim; Hirotsugu Okuno; Tetsuya Yagi; Minho Lee
This paper proposes a new hardware system for visual selective attention, in which a neuromorphic silicon retina chip is used as an input camera and a bottom-up saliency map model is implemented on a field-programmable gate array (FPGA) device. The proposed system mimics the roles of retinal cells, V1 cells, and parts of the lateral intraparietal area (LIP), namely edge extraction, orientation selectivity, and the selective attention response, respectively. The center-surround difference and normalization, which mimic the on-center and off-surround function of the lateral geniculate nucleus (LGN), are implemented on the FPGA. The artificial retina chip integrated with the FPGA successfully produces a human-like visual attention function with small computational overhead. To make the proposed system applicable to mobile robotic vision, it is designed for low power dissipation and compactness. The experimental results show that the proposed system successfully generates saliency information from natural scenes.
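The center-surround difference and normalization stages can be illustrated with a simplified, Itti-Koch-style software sketch: intensity and edge feature maps are compared across pyramid scales, normalized, and summed into a saliency map. The feature choices, pyramid depth, and normalization rule here are illustrative assumptions, not the FPGA design.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 averaging (cropping to an even size first)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    c = img[:h, :w]
    return (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2]) / 4.0

def normalize_map(m):
    """Rescale a map to [0, 1]; a stand-in for the normalization stage."""
    m = m - m.min()
    return m / (m.max() + 1e-9)

def saliency_sketch(gray, levels=4):
    """Simplified bottom-up saliency: intensity and edge features, center-surround
    differences across a small pyramid, normalization, and summation."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)
    features = [gray, np.hypot(gx, gy)]   # intensity + edge magnitude (stand-in for orientation)
    saliency = np.zeros_like(gray)
    for feat in features:
        pyramid = [feat]
        for _ in range(levels - 1):
            pyramid.append(downsample(pyramid[-1]))
        for c in range(levels - 1):
            center = pyramid[c]
            # Upsample the coarser "surround" level back toward the center's size.
            surround = np.kron(pyramid[c + 1], np.ones((2, 2)))
            hh = min(center.shape[0], surround.shape[0])
            ww = min(center.shape[1], surround.shape[1])
            cs = normalize_map(np.abs(center[:hh, :ww] - surround[:hh, :ww]))
            # Accumulate the center-surround map at full resolution.
            scale = 2 ** c
            full = np.kron(cs, np.ones((scale, scale)))
            saliency[:full.shape[0], :full.shape[1]] += full
    return normalize_map(saliency)

# Example: a small bright patch on a dim textured background should pop out.
rng = np.random.default_rng(2)
scene = rng.uniform(0.0, 0.2, (128, 128))
scene[60:70, 60:70] = 1.0
print(np.unravel_index(saliency_sketch(scene).argmax(), scene.shape))
```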
International Conference on Neural Information Processing | 2008
Hirotsugu Okuno; Tetsuya Yagi
Locusts have a remarkable ability for visual guidance, including collision avoidance, which they achieve by exploiting the limited neuronal networks in their small cephalon. We have designed and tested a real-time intelligent visual system for collision avoidance inspired by the visual nervous system of the locust. The system was implemented with mixed analog-digital integrated circuits consisting of an analog resistive network and field-programmable gate array (FPGA) circuits, so as to take advantage of real-time analog computation and programmable digital processing. The response properties of the system were examined by using simulated movie images, and the system was also tested in real-world situations by loading it on a motorized miniature car. The system was confirmed to respond selectively to colliding objects even in complex real-world situations.
Neural Networks | 2016
Shinsuke Yasukawa; Hirotsugu Okuno; Kazuo Ishii; Tetsuya Yagi
We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focused on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering with a configurable filter size. The FPGA is used to process the frequency-band signals in a pipeline. The proposed system was evaluated by tracking the feature points detected on an object in a video.
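For readers who want a conventional-software reference point, the sketch below detects SIFT keypoints with OpenCV and matches them between consecutive frames, which roughly mirrors the feature-point tracking used in the evaluation. In the sensor system itself, the multi-scale filtering is performed by the MOS-based resistive network and FPGA; the ratio-test threshold and synthetic frames here are illustrative assumptions.

```python
import cv2
import numpy as np

def track_sift_features(prev_gray, curr_gray, ratio=0.75):
    """Detect SIFT keypoints in two consecutive grayscale frames and return
    matched point pairs via Lowe's ratio test (`ratio` is an illustrative value)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = []
    for candidates in matcher.knnMatch(des1, des2, k=2):
        if len(candidates) == 2 and candidates[0].distance < ratio * candidates[1].distance:
            m = candidates[0]
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs

# Example: a textured square shifted by a few pixels between two synthetic frames.
rng = np.random.default_rng(3)
frame1 = np.zeros((240, 320), dtype=np.uint8)
frame1[80:160, 100:180] = rng.integers(0, 256, (80, 80), dtype=np.uint8)
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))
matches = track_sift_features(frame1, frame2)
print(len(matches), matches[:2])
```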