Taro Imagawa
Panasonic
Publications
Featured research published by Taro Imagawa.
International Solid-State Circuits Conference | 2010
Takeo Azuma; Taro Imagawa; Sanzo Ugawa; Yusuke Okada; Hiroyoshi Komobuchi; Motonori Ishii; Shigetaka Kasuga; Yoshihisa Kato
The recent trend in ultra-high-density cameras is moving from HD to 4K2K, and will extend further to 8K4K and portable 4K2K. With advances in device fabrication process technologies, there is a pressing need for miniaturization as well as high resolution and high sensitivity in image sensors [1].
International Symposium on Neural Networks | 1993
Susumu Maruno; Taro Imagawa; Toshiyuki Kohda; Yoshihiro Kojima; Hiroshi Yamamoto; Yasuharu Shimeki
The authors have previously proposed a multifunctional layered network (MFLN) employing a quantizer neuron model and showed that the learning speed of the MFLN is the fastest among RCE networks, LVQ3, and multi-layered neural networks trained with backpropagation. They also showed that the MFLN has very good supplemental learning performance and can realize adaptive learning and filtering. One of the biggest issues with neural networks is how to design the network structure. In this paper, the authors propose an adaptive segmentation of quantizer neuron architecture (ASQA) to address this issue and apply it to handwritten character recognition. Networks based on ASQA consist of quantizer neurons that can proliferate and automatically form the optimum network structure for recognition during training. As a result, there is no need to design the network structure, and the average accuracy over the closed and open tests on 27,200 handwritten numeric characters increased to 99.6%. Careful tuning of the segmentation threshold of the quantizer neurons produced the optimum network size for ASQA.
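The proliferation rule described in the abstract can be sketched as a nearest-prototype learner that adds a new quantizer neuron whenever no existing neuron matches the input within a distance threshold. The threshold value, update rate, and Euclidean metric below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class ASQASketch:
    """Hedged sketch of adaptive segmentation with proliferating
    quantizer neurons: a new prototype is added whenever no existing
    neuron quantizes the input within a distance threshold."""

    def __init__(self, threshold=0.5, rate=0.1):
        self.threshold = threshold  # segmentation threshold (assumed)
        self.rate = rate            # prototype update rate (assumed)
        self.prototypes = []        # list of (vector, label) pairs

    def train_step(self, x, label):
        x = np.asarray(x, dtype=float)
        best, best_d = None, np.inf
        for i, (p, _) in enumerate(self.prototypes):
            d = np.linalg.norm(x - p)
            if d < best_d:
                best, best_d = i, d
        if best is None or best_d > self.threshold:
            # proliferate: cover this region with a new neuron
            self.prototypes.append((x, label))
        else:
            # refine the winning prototype toward the input
            p, lab = self.prototypes[best]
            self.prototypes[best] = (p + self.rate * (x - p), lab)

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        ds = [np.linalg.norm(x - p) for p, _ in self.prototypes]
        return self.prototypes[int(np.argmin(ds))][1]
```

With this rule the network size is set by the data and the threshold rather than by a hand-designed architecture, mirroring the abstract's claim that the structure forms automatically during training.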
Asian Conference on Computer Vision | 2010
Sanzo Ugawa; Takeo Azuma; Taro Imagawa; Yusuke Okada
We propose an image reconstruction method and a sensor for high-sensitivity imaging that uses green pixels exposed over several frames. Extending the exposure time of the green pixels increases motion blur, so we use motion information detected from the high-frame-rate red and blue pixels to remove it. To implement this method, long- and short-term exposed pixels are arranged in a checkerboard pattern on a single-chip image sensor. Using the proposed method, we improved the sensitivity of the green pixels fourfold without any motion blur.
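The checkerboard exposure arrangement can be illustrated with a toy forward model in which long-exposure pixels integrate several frames while short-exposure pixels sample a single frame. The frame count, mask convention, and readout rule here are assumptions for illustration, not the actual sensor design.

```python
import numpy as np

def checkerboard_exposure(frames):
    """Toy forward model of the checkerboard sensor layout.
    frames: (T, H, W) instantaneous scene intensities.
    Long-exposure pixels (positions where y + x is even) integrate
    all T frames; short-exposure pixels read only the last frame.
    Returns the mixed readout and the long-exposure mask."""
    T, H, W = frames.shape
    yy, xx = np.mgrid[0:H, 0:W]
    long_mask = (yy + xx) % 2 == 0
    readout = np.where(long_mask, frames.sum(axis=0), frames[-1])
    return readout, long_mask
```

This makes the sensitivity gain concrete: for a static scene, a long-exposure pixel collects T times the light of a short-exposure one, and the short-exposure neighbors supply the motion information needed to undo the blur the integration introduces.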
Virtual Systems and Multimedia | 2010
Sanzo Ugawa; Takeo Azuma; Taro Imagawa; Yusuke Okada
We previously proposed a color video generation method for spatio-temporal high-resolution video imaging in dark conditions [1]. The method (the dual resolutions and exposures (DRE) method) consists of high-sensitivity imaging employing long-time exposure and a subsequent spatio-temporal decomposition process that suppresses the motion blur caused by the long exposure. The imaging step captures RGB color video sequences with different spatio-temporal resolution settings to increase the amount of captured light. The processing step reconstructs a high spatio-temporal resolution color video from those input video sequences using a regularization framework. In this study, the performance of DRE with regard to the spectral distribution of the subject was evaluated. First, the spectral distribution of a green subject (thought to be unfavorable from the viewpoint of imaging with DRE) was measured. Second, this subject was captured on video with DRE, and the peak signal-to-noise ratio (PSNR) of the video images was evaluated. Experimental results showed that the DRE method is effective for green subjects. Commonly seen subjects have broad spectral distributions, and the images reconstructed with DRE had high PSNRs. Moreover, even for a subject with a peaky spectral distribution (such as an LED), the PSNR was high, because the spectral characteristic of the color filter used with DRE has a wide crosstalk region across the colors.
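The PSNR figure used in the evaluation is the standard definition, 10 log10(peak² / MSE); a minimal implementation for reference:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference image and a
    reconstructed image, in dB. `peak` is the maximum possible pixel
    value (255 for 8-bit images)."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```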
International Conference on Consumer Electronics | 2009
Taro Imagawa; Takeo Azuma; Kunio Nobori; Hideto Motomura
We propose a color video generation method for spatio-temporal high-resolution video imaging in dark conditions. The proposed method consists of two steps. First, RGB-separated video sequences with different spatio-temporal resolution sets are captured to increase the amount of captured light. Second, a high spatio-temporal resolution color video is reconstructed from those input video sequences in the regularization framework. We show the advantages of our method using a prototype camera system and simulation results.
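The regularization framework mentioned above can be sketched, under the simplifying assumption of a linear sampling operator A and a finite-difference smoothness prior D (both illustrative, not the paper's actual operators), as a regularized least-squares solve:

```python
import numpy as np

def regularized_reconstruction(A, b, D, lam):
    """Recover x minimizing ||A x - b||^2 + lam * ||D x||^2 via the
    normal equations (A^T A + lam D^T D) x = A^T b.
    A: sampling/blur matrix, b: observed pixels,
    D: prior operator (e.g., finite differences), lam: weight."""
    lhs = A.T @ A + lam * (D.T @ D)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)
```

With lam = 0 the data are reproduced exactly; increasing lam trades data fidelity for smoothness, which is the knob such a reconstruction step tunes when fusing video sequences captured at different spatio-temporal resolutions.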
International Symposium on Neural Networks | 1993
Susumu Maruno; Taro Imagawa; Toshiyuki Kohda; Yasuharu Shimeki
One of the biggest issues in object recognition is achieving rotation invariance in a fluctuating, noisy environment. We developed an object recognition system using a temporal pattern recognition network with a quantizer neuron chip (QNC) and a φ-s transformation of shapes, and applied it to object recognition. The shape of an object is converted to a series of angles as a function of position along the circumference of the shape (φ-s data), which can be treated as a series of temporal patterns. The system consists of a multifunctional layered network (MFLN) with the QNC and a layer of neurons with self-feedback (the self-feedback layer). The self-feedback layer unifies the temporal recognition results of the QNC network over a period defined by the time constant of the self-feedback, realizing selective attention to certain areas of a series of temporal patterns. As a result, the system achieves rotation invariance in recognition, and we obtained 100% recognition accuracy over 50 trials with fluctuating noise captured by a CCD camera.
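The φ-s transformation can be sketched for a polygonal contour: the tangent angle φ of each edge as a function of cumulative arc length s. Rotating the shape only adds a constant to φ, which is the property behind the rotation invariance. This is an illustrative reconstruction, not the authors' exact implementation.

```python
import numpy as np

def phi_s_series(contour):
    """Convert a closed contour (N x 2 array of vertices, in order)
    into edge direction angles phi as a function of cumulative arc
    length s along the circumference."""
    pts = np.asarray(contour, dtype=float)
    diffs = np.roll(pts, -1, axis=0) - pts       # edge vectors
    phi = np.arctan2(diffs[:, 1], diffs[:, 0])   # edge directions
    seg = np.hypot(diffs[:, 0], diffs[:, 1])     # edge lengths
    s = np.concatenate(([0.0], np.cumsum(seg)[:-1]))
    return s, phi
```

Because rotation shifts every φ by the same constant (and a different starting vertex merely circular-shifts the series), a recognizer trained on φ-s data can match shapes regardless of orientation.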
Archive | 2007
Taro Imagawa; Takeo Azuma
Archive | 2001
Taro Imagawa; Tsuyoshi Mekata
Archive | 2003
Taro Imagawa; Masamichi Nakagawa; Takeo Azuma; Shusaku Okamoto
Archive | 1993
Taro Imagawa; Toshiyuki Kouda; Yoshihiro Kojima; Susumu Maruno; Yasuharu Shimeki