Publication


Featured research published by Yuichi Taguchi.


IEEE Transactions on Visualization and Computer Graphics | 2009

TransCAIP: A Live 3D TV System Using a Camera Array and an Integral Photography Display with Interactive Control of Viewing Parameters

Yuichi Taguchi; Takafumi Koike; Keita Takahashi; Takeshi Naemura

The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.
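The pixel-arrangement step lends itself to a compact sketch. The following NumPy fragment is a minimal CPU illustration, not the paper's GPU implementation; the function name, the 6 x 10 elemental-image layout, and the direction-to-pixel mapping are assumptions, since the actual mapping depends on the display's lens array.

```python
import numpy as np

def views_to_integral_photo(views, sy, sx):
    """Interleave D = sy * sx rendered views into one integral photography image.

    views : (D, H, W, 3) array with one rendered image per viewing direction.
    Each lenslet shows an sy x sx elemental image; pixel (dy, dx) inside the
    elemental image at lenslet (y, x) comes from view d = dy * sx + dx.
    """
    d, h, w, c = views.shape
    assert d == sy * sx, "number of views must match the elemental image size"
    grid = views.reshape(sy, sx, h, w, c)
    # (sy, sx, H, W, 3) -> (H, sy, W, sx, 3) -> (H*sy, W*sx, 3)
    return grid.transpose(2, 0, 3, 1, 4).reshape(h * sy, w * sx, c)

# e.g. 60 viewing directions laid out as 6 x 10 elemental images (hypothetical layout)
ip_image = views_to_integral_photo(np.zeros((60, 240, 320, 3)), sy=6, sx=10)
```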


3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2008

Real-Time All-in-Focus Video-Based Rendering Using A Network Camera Array

Yuichi Taguchi; Keita Takahashi; Takeshi Naemura

We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras connected to a single PC over Gigabit Ethernet. To render a high-quality novel view, we estimate a view-dependent, per-pixel depth map in real time by using a layered representation. The rendering algorithm is fully implemented on a GPU, which allows our system to use the CPU and GPU independently and in parallel. With QVGA input video resolution, our system renders free-viewpoint video at up to 30 fps, depending on the rendering parameters. Experimental results show high-quality images synthesized from various scenes.
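The layer-selection idea behind such a view-dependent, per-pixel depth estimate can be sketched as follows. This is a minimal NumPy illustration under assumptions not stated in the abstract: the views are taken as already warped onto each candidate layer, and color variance across views is used as the photo-consistency cost; the paper's GPU implementation and exact cost may differ.

```python
import numpy as np

def select_depth_layer(warped):
    """Pick a per-pixel depth layer by photo-consistency.

    warped : (L, K, H, W, 3) array holding the K input views warped onto each
    of L candidate depth layers (the projective warps themselves are omitted).
    The color variance across the K views is a standard photo-consistency
    cost; the layer minimizing it is taken as the per-pixel depth estimate.
    """
    cost = warped.var(axis=1).sum(axis=-1)   # (L, H, W) consistency cost
    return cost.argmin(axis=0)               # (H, W) layer index per pixel
```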


EURASIP Journal on Image and Video Processing | 2009

Rendering-oriented decoding for a distributed multiview coding system using a coset code

Yuichi Taguchi; Takeshi Naemura

This paper discusses a system in which multiview images are captured and encoded in a distributed fashion and a viewer synthesizes a novel image from this data. We present an efficient method for such a system that combines the decoding and rendering processes in order to directly synthesize the novel image without having to reconstruct all the input images. Our method jointly performs disparity compensation in the decoding process and geometry estimation in the rendering process, because they are essentially equivalent if the camera parameters for the input images are known. Our method keeps both encoder and decoder complexity as low as that of a conventional intra-coding method, while attaining better coding performance owing to inter-image decoding. We validate our method experimentally by evaluating its coding performance and the processing time required for decoding and rendering.
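The coset principle the decoder relies on can be shown with a scalar toy example. The modulo-m code below is an assumed stand-in for the paper's actual coset design; side_info plays the role of the disparity-compensated prediction that the joint geometry estimation supplies.

```python
import numpy as np

def coset_decode(coset_index, side_info, m):
    """Decode a modulo-m scalar coset code against a side-information estimate.

    The encoder transmits only x mod m per sample; the decoder recovers the
    value congruent to coset_index (mod m) that lies closest to side_info,
    here standing in for the disparity-compensated prediction obtained from
    the geometry estimated during rendering.
    """
    diff = (coset_index - side_info) % m            # in [0, m)
    diff = np.where(diff > m // 2, diff - m, diff)  # wrap to nearest congruent value
    return side_info + diff

# toy example (hypothetical values): true pixel 130, modulus 16, prediction 127
print(coset_decode(np.array(130 % 16), np.array(127), 16))   # -> 130
```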


International Conference on Computer Graphics and Interactive Techniques | 2009

Automatic colorization of grayscale images using multiple images on the web

Yuji Morimoto; Yuichi Taguchi; Takeshi Naemura

Colorization is the process of adding color to monochrome images and video. It is used to increase the visual appeal of images such as old black-and-white photos, classic movies, and scientific visualizations. Since colorizing a grayscale image involves assigning three-dimensional (RGB) values to pixels characterized by a single feature (luminance), the colorization problem does not have a unique solution, so human interaction is typically required. Although existing colorization methods attempt to minimize the amount of user intervention, they still require users to manually select an image similar to the target or to input a set of color seeds for different regions of the target image. In this paper, we present an entirely automatic colorization method using multiple images collected from the Web. The method generates varied, natural colorized images from an input monochrome image by using information about the scene structure.
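As a rough illustration of color transfer from a web-retrieved reference, consider the toy sketch below. It matches pixels globally by luminance only; the function and matching rule are hypothetical simplifications, whereas the paper exploits scene structure for spatially consistent correspondences across multiple images.

```python
import numpy as np

def transfer_color(gray, ref_rgb):
    """Toy luminance-matching color transfer from a single reference image.

    gray    : (H, W) grayscale target in [0, 1].
    ref_rgb : (h, w, 3) color reference in [0, 1], e.g. a similar image
              retrieved from the web.
    Each target pixel borrows the color of the reference pixel with the
    closest luminance -- a purely global match.
    """
    ref_y = ref_rgb.mean(axis=-1).ravel()                 # crude luminance
    order = np.argsort(ref_y)
    idx = np.searchsorted(ref_y[order], gray.ravel())
    matched = ref_rgb.reshape(-1, 3)[order[idx.clip(0, order.size - 1)]]
    # rescale so the output keeps the target's own luminance
    scale = gray.ravel()[:, None] / np.maximum(matched.mean(-1, keepdims=True), 1e-6)
    return np.clip(matched * scale, 0, 1).reshape(*gray.shape, 3)
```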


International Conference on Image Processing | 2006

View-Dependent Coding of Light Fields Based on Free-Viewpoint Image Synthesis

Yuichi Taguchi; Takeshi Naemura

This paper proposes a view-dependent light field coding scheme that applies image-based rendering techniques prior to coding. The proposed coder first synthesizes an image at a given viewpoint, called the representative viewpoint, and then predicts all input images using the synthesized image as a reference. It produces a view-dependent scalable bitstream: the quality of synthesized views around the representative viewpoint remains high even at extremely low bit rates, and the quality of views farther away improves as the bit rate increases. Our experimental results show that this coding scheme also achieves good coding efficiency for both multi-camera images and integral photography, which are common light field representations.
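A minimal sketch of this prediction structure, assuming a generic predict() stand-in for the image-based-rendering prediction (the actual coder's prediction and entropy coding are not shown):

```python
import numpy as np

def view_dependent_residuals(inputs, viewpoints, rep_view, rep_point, predict):
    """Predict every input image from the representative view, then order the
    residuals by distance from the representative viewpoint.

    predict(rep_view, v) stands in for the image-based-rendering prediction of
    the input at viewpoint v from the representative image. Emitting residuals
    nearest the representative viewpoint first yields the view-dependent
    scalable behavior described above: truncating the stream degrades distant
    views before nearby ones.
    """
    residuals = [img - predict(rep_view, v) for img, v in zip(inputs, viewpoints)]
    order = np.argsort([np.linalg.norm(v - rep_point) for v in viewpoints])
    return [residuals[i] for i in order]
```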


International Conference on Computer Graphics and Interactive Techniques | 2008

TransCAIP: live transmission of light field from a camera array to an integral photography display

Yuichi Taguchi; Takafumi Koike; Keita Takahashi; Takeshi Naemura

TransCAIP provides a real-time 3D visual experience by using an array of 64 cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters.


Proceedings of SPIE, the International Society for Optical Engineering | 2006

View-dependent scalable coding of light fields using ROI-based techniques

Yuichi Taguchi; Keita Takahashi; Takeshi Naemura

This paper proposes a scalable coding scheme for interactive streaming of dynamic light fields, in which a region-of-interest (ROI) approach is applied to multi-view image sets. In our method, the image segments that are essential for synthesizing the view requested by a remote user are included in an ROI, which is compressed and transmitted with high priority. Since the data for the desired view are transmitted together with the data for its neighboring views as the ROI, the user can render high-quality novel views around the desired viewpoint before the next frame data arrive. Our method can thus compensate for the movement of the remote user even if the network has high latency. Since the user can arbitrarily choose the movable range of the viewpoint by changing the size and weight ratio of the ROI, we call this functionality view-dependent scalability. Using a modified JPEG2000 codec, we evaluated the view-dependent scalability of our scheme by measuring the quality of synthesized views against the distance from the originally requested viewpoint.
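The size/weight trade-off can be illustrated with a toy priority assignment. A 1-D camera arrangement and the specific weights are assumptions; the real system applies such weights to image segments inside a modified JPEG2000 codestream.

```python
import numpy as np

def roi_weights(num_views, requested, radius, hi=1.0, lo=0.1):
    """Assign transmission priority to the views of one dynamic-light-field frame.

    Views within `radius` of the requested viewpoint form the ROI and are
    compressed and transmitted with weight `hi`; the rest get `lo`. Enlarging
    `radius` widens the range the user can move before the next frame arrives,
    at the cost of bit rate -- the size/weight trade-off described above.
    """
    dist = np.abs(np.arange(num_views) - requested)
    return np.where(dist <= radius, hi, lo)

print(roi_weights(8, requested=3, radius=1))   # [0.1 0.1 1.  1.  1.  0.1 0.1 0.1]
```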


International Conference on Computer Graphics and Interactive Techniques | 2005

Free-viewpoint thumbnail for light field compression

Yuichi Taguchi; Takeshi Naemura

This paper proposes a new JPEG-compatible coder for light field data. When viewing a 3D scene using light field data, we usually require a specialized light field viewer. However, the proposed coder produces an output file from which we can see a thumbnail of the light field using common JPEG viewers. This thumbnail can be a free-viewpoint image of the scene that reflects the user's preference or the creator's intent. We can also achieve good compression efficiency by using this thumbnail as a reference image.
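One plausible container layout consistent with the abstract is a standard JPEG of the thumbnail followed by the remaining light field data, since baseline JPEG decoders stop at the end-of-image marker. The sketch below shows only that layout guess; the paper's actual file format and its use of the thumbnail as a prediction reference are not specified here.

```python
import io
from PIL import Image

def write_jpeg_lightfield(thumb_array, lightfield_payload, path):
    """Write a JPEG-compatible light field file (one plausible container layout).

    thumb_array        : (H, W, 3) uint8 free-viewpoint thumbnail image.
    lightfield_payload : bytes of the remaining light field bitstream.
    Baseline JPEG decoders stop reading at the end-of-image (EOI) marker, so
    the file stays viewable as an ordinary JPEG in common viewers.
    """
    buf = io.BytesIO()
    Image.fromarray(thumb_array).save(buf, format="JPEG", quality=90)
    with open(path, "wb") as f:
        f.write(buf.getvalue())        # ordinary JPEG stream, ends with EOI
        f.write(lightfield_payload)    # extra data, ignored by JPEG viewers
```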


International Conference on Image Processing | 2007

Rendering-Oriented Decoding for Distributed Multi-View Coding System

Yuichi Taguchi; Takeshi Naemura

This paper discusses a system in which multi-view images are captured and encoded in a distributed fashion and a viewer synthesizes a novel view from this data. We developed an efficient method for such a system that combines the decoding and rendering processes to directly synthesize the novel image without reconstructing all the input images. Our method jointly performs disparity compensation in the decoding process and geometry estimation in the rendering process, because they are essentially equivalent if the camera parameters for the input images are known. It achieves low complexity for both the encoder and the decoder in a distributed multi-view coding system. Experimental results show the superior coding performance of our method compared to a conventional intra-coding method, especially at low bit rates.


International Conference on Computer Graphics and Interactive Techniques | 2007

GPU-oriented light field compression for real-time streaming

Toru Ando; Yuichi Taguchi; Takeshi Naemura

Emerging applications such as free-viewpoint video and 3D-TV can enhance our viewing experience by rendering arbitrary viewpoint images using transmitted light field data. Compression is one of the key technologies for such systems due to the huge amount of data, typically captured with hundreds of cameras or thousands of lenslets.
