Publications


Featured research published by Takeo Azuma.


International Solid-State Circuits Conference | 2010

A 2.2/3-inch 4K2K CMOS image sensor based on dual resolution and exposure technique

Takeo Azuma; Taro Imagawa; Sanzo Ugawa; Yusuke Okada; Hiroyoshi Komobuchi; Motonori Ishii; Shigetaka Kasuga; Yoshihisa Kato

The recent trend in ultra-high-definition cameras is a move from HD to 4K2K, which will extend further to 8K4K and portable 4K2K. Alongside advances in device fabrication process technology, there is a pressing need for miniaturization as well as high resolution and high sensitivity in image sensors [1].


Asian Conference on Computer Vision | 2010

Image reconstruction for high-sensitivity imaging by using combined long/short exposure type single-chip image sensor

Sanzo Ugawa; Takeo Azuma; Taro Imagawa; Yusuke Okada

We propose an image reconstruction method and a sensor for high-sensitivity imaging in which green pixels are exposed over several frames. Extending the exposure time of the green pixels increases motion blur, so we use motion information detected from the high-frame-rate red and blue pixels to remove it. To implement this method, long- and short-exposure pixels are arranged in a checkerboard pattern on a single-chip image sensor. Using the proposed method, we improved the sensitivity of the green pixels fourfold without introducing motion blur.
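
A minimal sketch of the layout and forward model described above (illustrative only, not the authors' code): the checkerboard mask assigns pixels to the long- or short-exposure group, and the long-exposure green is modeled as a sum of motion-shifted latent frames, which is the blur the reconstruction must undo using motion estimated from the high-frame-rate red/blue pixels. Function names and the integer-pixel motion model are assumptions.

    import numpy as np

    def checkerboard_mask(h, w):
        """True where a pixel belongs to the long-exposure group."""
        yy, xx = np.mgrid[0:h, 0:w]
        return (yy + xx) % 2 == 0

    def simulate_long_exposure(latent_frames, motions):
        """Accumulate latent frames shifted by per-frame motion (dy, dx);
        this produces the motion blur carried by the long-exposure green."""
        acc = np.zeros_like(latent_frames[0], dtype=np.float64)
        for frame, (dy, dx) in zip(latent_frames, motions):
            acc += np.roll(frame, shift=(dy, dx), axis=(0, 1))
        return acc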


Proceedings of SPIE | 2012

Real-time computational camera system for high-sensitivity imaging by using combined long/short exposure

Satoshi Sato; Yusuke Okada; Takeo Azuma

In this study, we realize a high-resolution (4K-format), small-size (1.43 x 1.43 μm pixel pitch on a single imager), and high-sensitivity (four times more sensitive than conventional imagers) video camera system. The proposed system is a real-time computational camera that combines long-exposure green pixels with short-exposure red/blue pixels. We demonstrate that the proposed camera system is effective even under low illumination.


Digital Image Computing: Techniques and Applications | 2011

Real Time High-Sensitivity Imaging for Home Surveillance System by Using Combined Long/Short Exposure

Satoshi Sato; Yusuke Okada; Takeo Azuma

In the future, a comprehensive home surveillance system could be constructed from a network of cameras already present in household products. To be incorporated into household products, a camera must be small; for surveillance, resolution and sensitivity are also important, and there is a trade-off among these three requirements: small size, high resolution, and high sensitivity. In this study, we realize a real-time camera system that satisfies all three requirements for home surveillance. The proposed camera system is 4K-format with four times higher sensitivity than conventional imagers, and the pixel pitch of its imager is 1.43 μm. We also propose a speedup algorithm that achieves 24 fps operation using 62 FPGAs.


Virtual Systems and Multimedia | 2010

Performance evaluation of high sensitive DRE camera for cultural heritage in subdued light conditions

Sanzo Ugawa; Takeo Azuma; Taro Imagawa; Yusuke Okada

We previously proposed a color video generation method for spatio-temporal high-resolution imaging in dark conditions [1]. The method (the dual resolutions and exposures (DRE) method) consists of high-sensitivity imaging that employs long exposure and a subsequent spatio-temporal decomposition process that suppresses the motion blur caused by the long exposure. The imaging step captures RGB color video sequences with different spatio-temporal resolution settings to increase the amount of collected light. The processing step reconstructs a high spatio-temporal resolution color video from those input sequences within a regularization framework. In this study, the performance of DRE with regard to the spectral distribution of a subject was evaluated. First, the spectral distribution of a green subject (thought to be unfavorable for imaging with DRE) was measured. Second, this subject was captured on video with DRE, and the peak signal-to-noise ratio (PSNR) of the resulting images was evaluated. Experimental results showed that the DRE method is effective for green subjects. Commonly seen subjects have broad spectral distributions, and the images reconstructed with DRE had high PSNRs. Moreover, even for a subject with a peaky spectral distribution (such as an LED), the PSNR was high, because the spectral characteristic of the color filter used with DRE has a wide crosstalk region across the colors.
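
For reference, the peak signal-to-noise ratio used to score the reconstructed frames is the standard measure; a minimal sketch, assuming 8-bit images:

    import numpy as np

    def psnr(reference, reconstructed, peak=255.0):
        """Standard PSNR in dB between a reference frame and its reconstruction."""
        mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(peak ** 2 / mse)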


International Conference on Consumer Electronics | 2009

Video generation for high spatio-temporal resolution imaging

Taro Imagawa; Takeo Azuma; Kunio Nobori; Hideto Motomura

We propose a color video generation method for spatio-temporal high-resolution video imaging in dark conditions. The proposed method consists of two steps. First, RGB-separated video sequences with different spatio-temporal resolution sets are captured to increase the amount of captured light. Second, a high spatio-temporal resolution color video is reconstructed from those input video sequences in the regularization framework. We show the advantages of our method using a prototype camera system and simulation results.
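
The "regularization framework" referred to here and in the related abstracts above is not spelled out; as a hedged illustration, a generic reconstruction of this kind takes the standard form

    \hat{f} \;=\; \arg\min_{f} \; \sum_{c \in \{R,G,B\}} \bigl\| H_c f - g_c \bigr\|_2^2 \;+\; \lambda\, \Phi(f)

where g_c are the captured per-channel sequences, H_c models each channel's spatio-temporal sampling and exposure, Φ is a smoothness or motion-consistency prior, and λ balances the two terms. The specific operators and priors used in the paper are not given in this abstract.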


Electronic Imaging | 1999

Real-time active range finder using light intensity modulation

Takeo Azuma; Kenya Uomori; Atsushi Morimura

We propose a new real-time active range finder system that operates at video frame rate. Our system consists of a laser line pattern marker, a rotating mirror, a camera with a narrow optical bandpass filter, and a pipelined image signal processor. The vertical laser line pattern is scanned horizontally by the video-synchronized rotating mirror. The pattern's intensity is modulated by alternating a monotonically decreasing function and a monotonically increasing function. The bandpass filter selectively transmits the laser light to reduce disturbance from background light. Depth images are obtained from the intensity ratio of successive video field images, the coordinates of each pixel, and the baseline length. The intensity ratio identifies the plane containing a line pattern, while the coordinates of each pixel define a line that passes through the pixel position on the CCD and the center of the lens. Depth is calculated as the intersection of the identified plane and line. The image signal processor performs this calculation using LUTs within a video frame. In this paper, we evaluate the measurable range, precision, and color properties of our system. Experimental results show that a two-meter measurable range is obtained with 50 mW of laser power, that the standard deviations of the depth images with 5 x 5 median filtering are about 1 percent of the measured depth, and that object color has little effect on the measured depth except when the reflectance of the object is very small.
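
An illustrative sketch of the intensity-ratio triangulation described above (the function names, the linear angle coding, and the simplified geometry are assumptions, not the paper's calibration): the ratio of the two modulated field intensities identifies the angle of the projected light plane, and depth follows from intersecting that plane with the pixel's viewing ray over the known baseline.

    import numpy as np

    def projection_angle_from_ratio(i_increasing, i_decreasing, theta_min, theta_max):
        """Map the intensity ratio of two successive fields to a scan angle,
        assuming the increasing/decreasing modulation is linear in angle."""
        r = i_increasing / (i_increasing + i_decreasing)   # normalized ratio in [0, 1]
        return theta_min + r * (theta_max - theta_min)

    def depth_by_triangulation(theta_projector, theta_pixel, baseline):
        """Active-triangulation depth: both angles are measured from the baseline,
        one at the projector (light plane), one at the camera (viewing ray)."""
        return (baseline * np.tan(theta_projector) * np.tan(theta_pixel)
                / (np.tan(theta_projector) + np.tan(theta_pixel)))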


International Conference on Machine Vision | 2015

Compressive sensing reconstruction using collaborative sparsity among color channels

Satoshi Sato; Motonori Ishii; Yoshihisa Kato; Kunio Nobori; Takeo Azuma

This study describes a reconstruction method for compressive sensing that uses collaborative sparsity among multi-frame images and color channels. The proposed method reduces compressive-sensing artifacts and obtains better image quality. Experimental results reveal that the proposed method reconstructs images of 6.1 dB higher quality than the conventional method for complex textures at occlusion boundaries.
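
As a hedged illustration of what "collaborative sparsity" usually means in this setting (the paper's exact formulation is not given in the abstract), the reconstruction can be posed as a joint-sparsity problem in which a mixed l2,1 norm ties the supports of the frames and color channels together:

    \hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\| \Phi x - y \|_2^2 \;+\; \lambda\, \| \Psi x \|_{2,1},
    \qquad \| Z \|_{2,1} \;=\; \sum_{i} \sqrt{\textstyle\sum_{k} Z_{i,k}^2}

where Φ is the compressive measurement operator, y the measurements, Ψ a sparsifying transform, and k indexes the frames and color channels that share coefficient i.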


International Conference on Consumer Electronics | 2012

Blur-free high-sensitivity imaging system utilizing combined long/short exposure green pixels

Satoshi Sato; Yusuke Okada; Takeo Azuma

In previous work, we proposed a high-sensitivity imaging system based on long exposure of all the G (green) pixels, in which the inherent blur was suppressed using a regularization method together with motion information obtained from the blur-free, short-exposure R (red)/B (blue) pixels. However, that method is ineffective when the detected motion information is inaccurate: insufficient motion accuracy degrades image quality and leaves residual blur. In this study, we propose a new high-sensitivity imaging system that remains stable under such conditions. We arrange long- and short-exposure G pixels on alternating lines. By combining the regularization method with the blur-free G-pixel information obtained from the short exposure, the proposed system requires no motion information, and the reconstructed images contain no blur even when accurate motion detection is difficult.
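
A rough sketch of why no motion information is needed under this layout (an illustrative assumption, not the paper's algorithm): the short-exposure rows alone already yield a blur-free, full-resolution green estimate by simple row interpolation, and the abstract states that this blur-free information is then combined with the long-exposure rows in the regularization.

    import numpy as np

    def blur_free_green_estimate(green_plane):
        """green_plane: 2-D array whose odd rows hold short-exposure green samples.
        Fill the long-exposure (even) rows with the mean of their short-exposure
        neighbor rows to get a blur-free, if noisier, full-resolution estimate."""
        est = green_plane.astype(np.float64).copy()
        h = est.shape[0]
        for i in range(0, h, 2):                      # even rows = long-exposure rows
            neighbors = [est[j] for j in (i - 1, i + 1) if 0 <= j < h]
            est[i] = np.mean(neighbors, axis=0)
        return est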


International Conference on Machine Vision | 2017

Compressive color sensing using random complementary color filter array

Satoshi Sato; Nobuhiko Wakai; Kunio Nobori; Takeo Azuma; Takamichi Miyata; Makoto Nakashizuka

We propose a new color imaging system based on a compressive sensing technique. Our system consists of a random complementary color filter array (CFA) for random projection and a color reconstruction method for demosaicing. Our CFA overlaps two complementary color filters and consists of six color filters: cyan (C), yellow (Y), magenta (M), C+Y, C+M, and Y+M. By arranging these six color filters randomly, our imaging system achieves a pseudo-random projection among the red (R)/green (G)/blue (B) colors, which is the key technology of compressive sensing. Because this CFA retains more color information than an RGB CFA, the proposed color reconstruction method reduces artifacts at monochromatic edges and in high-frequency regions and obtains better image quality. As an additional contribution, we introduce saturation consistency to suppress color artifacts in saturated areas, achieving images of 3.3 dB higher quality than the conventional method.
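
A hedged sketch of the sensing model implied by this description (idealized transmittances and function names are assumptions; the real filters' spectral crosstalk is what keeps the projection from collapsing to pure primaries): each pixel receives one of the six filters, and its measurement is a fixed linear combination of the scene's R, G, and B values at that pixel, so a random filter arrangement acts as a pseudo-random projection over color.

    import numpy as np

    # Idealized RGB transmittances: C passes G+B, M passes R+B, Y passes R+G.
    BASE = {
        "C": np.array([0.0, 1.0, 1.0]),
        "M": np.array([1.0, 0.0, 1.0]),
        "Y": np.array([1.0, 1.0, 0.0]),
    }
    FILTER_WEIGHTS = dict(BASE)
    FILTER_WEIGHTS.update({
        # Overlapped (stacked) filters: transmittances multiply, so with ideal dyes
        # these reduce to primaries; real filters keep crosstalk across colors.
        "C+Y": BASE["C"] * BASE["Y"],
        "C+M": BASE["C"] * BASE["M"],
        "Y+M": BASE["Y"] * BASE["M"],
    })

    def random_cfa(h, w, seed=0):
        """Randomly assign one of the six filter names to every pixel."""
        rng = np.random.default_rng(seed)
        return rng.choice(list(FILTER_WEIGHTS), size=(h, w))

    def sense(rgb_image, cfa):
        """Per-pixel dot product between the scene RGB and the pixel's filter weights."""
        weights = np.stack([[FILTER_WEIGHTS[name] for name in row] for row in cfa])
        return np.einsum("hwc,hwc->hw", rgb_image.astype(np.float64), weights)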
