Ryutaro Oi
University of Tokyo
Publication
Featured research published by Ryutaro Oi.
IEEE/OSA Journal of Display Technology | 2011
Takanori Senoh; Tomoyuki Mishina; Kenji Yamamoto; Ryutaro Oi; Taiichiro Kurita
This paper describes a viewing-zone-angle-expanded color electronic holography system using ultra-high-definition liquid crystal displays (LCDs) with undesirable light elimination. The authors first investigate methods for eliminating undesirable light, color aberration, and astigmatism. They then investigate spatio-temporal multiplexing methods to reduce system complexity. These investigations enable an electronic holography system to be constructed using 33-Mpixel LCDs. Experimental results show high-quality full-color 3D images with a diagonal size of 4 cm, a viewing-zone angle of 15 degrees, and a frame rate of 20 fps.
Scientific Reports | 2015
Hisayuki Sasaki; Kenji Yamamoto; Koki Wakunami; Yasuyuki Ichihashi; Ryutaro Oi; Takanori Senoh
In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees.
International Conference on Image Processing | 2003
Ryutaro Oi; Kiyoharu Aizawa
In this paper, we propose a wide-dynamic-range imaging system using a sensitivity-adjustable CMOS image sensor. This method is effective for capturing scenes that contain both very bright and very dark areas at the same time. The proposed sensor has pixels with adjustable sensitivity; the sensitivity is changed by switching an extra capacitor in each pixel circuit. We designed and implemented a prototype VLSI chip with 200 × 200 pixels. Verification shows that the sensitivity can be changed over a 6 dB range. We also built a wide-dynamic-range image capture system with this prototype and observed appropriate output from the image sensor.
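The switched-capacitor sensitivity idea lends itself to a simple software fusion rule: capture the scene at both sensitivities and, wherever the high-sensitivity reading saturates, substitute the low-sensitivity reading rescaled by the gain ratio. A minimal sketch assuming normalized sensor outputs and the reported 6 dB (roughly 2×) sensitivity range; the function and parameter names are hypothetical, not from the paper:

```python
import numpy as np

def fuse_dual_sensitivity(low_gain, high_gain, gain_ratio_db=6.0, sat_level=0.95):
    """Fuse two captures of the same scene taken at two pixel sensitivities
    into one wide-dynamic-range image.

    low_gain, high_gain: float arrays in [0, 1] (normalized sensor output).
    gain_ratio_db: sensitivity difference between the two modes; the
    prototype in the paper reports a range of about 6 dB.
    """
    ratio = 10.0 ** (gain_ratio_db / 20.0)  # 6 dB is roughly a 2x gain
    # Where the high-sensitivity pixel saturates, fall back to the
    # low-sensitivity reading, rescaled to the common radiometric scale.
    return np.where(high_gain < sat_level, high_gain, low_gain * ratio)
```

In a real pipeline the two readings would come from the same pixel under the two capacitor settings, so no registration between exposures is needed.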
IEEE Sensors | 2002
Ryutaro Oi; Takayuki Hamamoto; Kiyoharu Aizawa
In the field of computer graphics, a new method called image-based rendering (IBR) has been researched. With this method, arbitrary 3D views can be synthesized from an array of input images. However, most current research concentrates on static scenes because of the difficulty of handling a large number of simultaneous video inputs. The authors adopt a random access image sensor for the camera array. In our system, each sensor outputs only the pixels that are externally selected, so that the total readout from the camera array is reduced to the data necessary and sufficient for rendering. We designed and implemented a pixel-based random access image sensor in 0.8 µm CMOS technology and built a camera array system with sixteen of the prototype devices. In this paper, we present the design of the random access CMOS image sensors and the construction of the camera array system. An evaluation of the prototype sensor and results of arbitrary-view synthesis are also described.
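The selective readout can be illustrated with a toy light-field geometry: for each column of the desired virtual view, trace the ray to a focal plane, pick the real camera nearest to where the ray crosses the camera baseline, and request only the corresponding pixel from that camera. This is a minimal sketch under an assumed 1D pinhole geometry, not the authors' exact selection rule:

```python
import numpy as np

def readout_requests(cam_xs, eye_x, eye_z, focal_z, width, half_fov_tan=0.5):
    """Selective-readout sketch for a 1D camera array on the line z = 0.

    For each column of the virtual view, trace the eye ray to the focal
    plane at z = focal_z, find where it crosses the camera baseline, and
    request the matching pixel from the nearest real camera. Returns
    (camera_index, source_column) pairs: only these pixels would be read.
    Hypothetical geometry; all cameras share the same pinhole model.
    """
    cam_xs = np.asarray(cam_xs, dtype=float)
    requests = []
    for col in range(width):
        u = ((col + 0.5) / width - 0.5) * 2 * half_fov_tan  # ray slope dx/dz
        x_base = eye_x + u * (0.0 - eye_z)       # crossing of baseline z = 0
        cam = int(np.argmin(np.abs(cam_xs - x_base)))
        x_f = eye_x + u * (focal_z - eye_z)      # point hit on the focal plane
        u_src = (x_f - cam_xs[cam]) / focal_z    # slope seen from that camera
        col_src = int(round((u_src / (2 * half_fov_tan) + 0.5) * width - 0.5))
        requests.append((cam, int(np.clip(col_src, 0, width - 1))))
    return requests
```

In the actual sensor array this request list would drive the external pixel-address lines; the point of the sketch is that a small, view-dependent subset of pixels is sufficient for each rendered frame.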
Spie Newsroom | 2014
Yasuyuki Ichihashi; Ryutaro Oi; Takanori Senoh; Hisayuki Sasaki; Koki Wakunami; Kenji Yamamoto
Electronic holography—generating holograms using electro-optical apparatus—could enable the ultimate interactive 3D display system.1 The technique shows promise for 3D television, virtual workspaces for telework, and teleconferences, all of which require the construction of a moving 3D image without the need for 3D glasses. However, realizing a real-time electronic holography system requires processing a large amount of captured 3D data to generate the holograms. To resolve this issue, we used integral photography (IP)—a 3D imaging technique—instead of the conventional depth camera (a system that captures a color image with a digital camera and a depth ‘map’ with IR light), together with a graphics processing unit to capture 3D objects for parallel processing.2 In addition, we adapted the IP optical setup to use a fast Fourier transform algorithm for real-time hologram calculation. Figure 1 shows a schematic overview of our system for real-time image capturing and reconstruction. The setup consists of three component blocks: capture, calculation, and display, which are shown in Figures 2–4.2 The first block captures a 3D image with IP, using a system that comprises a lens array, a field lens, a spatial filter, and a 4K (386
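The FFT-based hologram calculation that makes real-time operation tractable can be illustrated with a single-FFT Fresnel (angular-spectrum) propagation kernel, a standard building block for computing the field at the hologram plane. Illustrative only; the method, wavelength, pitch, and distance here are generic placeholders, not the paper's parameters:

```python
import numpy as np

def fresnel_hologram(field, wavelength, pitch, distance):
    """Propagate a square complex object field to the hologram plane with
    a single FFT pair, using the paraxial (Fresnel) angular-spectrum
    transfer function. O(n^2 log n) per frame instead of O(n^4) for a
    naive point-by-point hologram sum.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)          # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel-approximation transfer function for the given distance.
    H = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating with distance `-d` inverts the propagation with `+d`, which gives a convenient self-check of the kernel.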
SPIE Newsroom | 2011
Kenji Yamamoto; Yasuyuki Ichihashi; Takanori Senoh; Ryutaro Oi; Taiichiro Kurita
Holography is a technology that reconstructs light as if the imaged object were still present. It has long attracted attention for use in 3D displays. However, electronic holography has not yet met its full potential, and requires further development for use in communications. While the capacity to show moving objects is an attractive feature of electronic holography,1–5 it nevertheless poses some challenges. These include removing interrupting light, enlarging the viewing angle (which is limited by the large pixel pitch of the electronic devices used to display hologram data), and acquiring hologram data in natural light. We have developed electronic holography system prototypes to address these issues.6 We discuss one of them here: an electronic holography system with a camera array. Our prototype system uses a camera array, which we designed and manufactured, to capture ray information from real 3D objects. Figure 1 (a), (b), and (c) depict the camera array, the camera-in-camera array, and the electronic holography system, respectively. The array consists of 25 cameras aligned on a circle of radius 1500 mm at intervals of 1.2 degrees. The interval between cameras is approximately 32 mm, which results in a very densely arranged array. All of the cameras face the center of the circle, where a person will be located. Dense, high-quality ray information, which we term super-dense ray information, is necessary to showcase the inherent capacities of electronic holography. The camera array requires view-interpolation signal processing to synthesize super-dense ray information from captured images. Much signal-processing research has been conducted in the research fields of computer
Figure 1. Prototype electronic holography system with a camera array.
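The viewing-angle limitation attributed above to pixel pitch follows from the grating equation: a hologram display with pixel pitch $p$ can diffract light of wavelength $\lambda$ into a viewing-zone angle of at most

```latex
\theta = 2\arcsin\!\left(\frac{\lambda}{2p}\right)
```

As an illustrative example (the values are not taken from the paper), green light with $\lambda \approx 532\,\mathrm{nm}$ on a panel with $p \approx 4.8\,\mu\mathrm{m}$ pitch gives $\theta \approx 6.4^\circ$, which is why finer pixel pitches, or tiling many panels, are needed to enlarge the viewing zone.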
International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2003
Ryutaro Oi; Takayuki Hamamoto; Kiyoharu Aizawa
Image-based rendering (IBR) is a powerful tool for synthesizing arbitrary views of a real scene. So far, almost all IBR research has been limited to static scenes because of the difficulty of simultaneously capturing a dynamic scene with a number of cameras. Although IBR needs many images of the real scene from different viewpoints, only a small number of pixels in each image are used for the synthesis. We propose a capturing system for real-time IBR of dynamic scenes that uses special random access image sensors. In the proposed system, the image sensors selectively output the pixels required for synthesis, and the selection of pixels is dynamically changed according to the position of the virtual view. We designed and prototyped CMOS random access image sensors for real-time IBR. Using sixteen of these sensors, we built a random access IBR camera array system. In this paper, we describe the design and experimental results of the camera array system.
World Automation Congress | 2002
Ryutaro Oi; Takayuki Hamamoto; Kiyoharu Aizawa
We investigate a real-time image-based rendering (IBR) system. IBR enables one to synthesize an arbitrary 3D view from a 2D array of input images. However, conventional systems have difficulty handling dynamic scenes because of the volume of input data. We design an IBR system with random access image sensors, in which each image sensor outputs only the pixels that are externally selected, so that the total readout from the camera array is reduced to the data necessary and sufficient for rendering. We developed a prototype random access image sensor in a CMOS process. A prototype IBR system with 16 of these sensors was constructed, and it synthesized arbitrary views at 60 frames per second.
Archive | 2007
Tomoyuki Mishina; Ryutaro Oi; Masato Okui; Takanori Senoo; Kenji Yamamoto
Archive | 2007
Atsushi Arai; Hiroshi Kawai; Tomoyuki Mishina; Yuji Nojiri; Ryutaro Oi; Masato Okui; Fumio Okuno
Collaboration
Dive into Ryutaro Oi's collaborations.
National Institute of Information and Communications Technology