Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kazuto Kamikura is active.

Publication


Featured research published by Kazuto Kamikura.


IEEE Transactions on Circuits and Systems for Video Technology | 2007

Multiview Video Coding Using View Interpolation and Color Correction

Kenji Yamamoto; Masaki Kitahara; Hideaki Kimata; Tomohiro Yendo; Toshiaki Fujii; Masayuki Tanimoto; Shinya Shimizu; Kazuto Kamikura; Yoshiyuki Yashima

Neighboring views must be highly correlated in multiview video systems. We should therefore use various neighboring views to efficiently compress videos. There are many approaches to doing this. However, most of these treat pictures of other views in the same way as they treat pictures of the current view, i.e., pictures of other views are used as reference pictures (inter-view prediction). We introduce two approaches to improving compression efficiency in this paper. The first is by synthesizing pictures at a given time and a given position by using view interpolation and using them as reference pictures (view-interpolation prediction). In other words, we tried to compensate for geometry to obtain precise predictions. The second approach is to correct the luminance and chrominance of other views by using lookup tables to compensate for photoelectric variations in individual cameras. We implemented these ideas in H.264/AVC with inter-view prediction and confirmed that they worked well. The experimental results revealed that these ideas can reduce the number of generated bits by approximately 15% without loss of PSNR.
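The color-correction idea in the abstract above (lookup tables that compensate for photoelectric variations between cameras) can be sketched compactly. Deriving the table from a single global gain over matched samples is a simplifying assumption for this example; the paper's actual tables are built per luminance/chrominance channel.

```python
# Hypothetical sketch of lookup-table color correction between cameras.
# A single gain-based LUT stands in for the per-channel tables described
# in the paper; build_lut and apply_lut are illustrative names.

def build_lut(src_samples, dst_samples):
    """Build an 8-bit lookup table from matched samples of two cameras."""
    gain = sum(dst_samples) / sum(src_samples)
    return [min(255, round(v * gain)) for v in range(256)]

def apply_lut(pixels, lut):
    """Correct a neighboring view's pixels through the table before prediction."""
    return [lut[p] for p in pixels]

# A neighboring camera that captures the same scene about 10% darker:
src = [100, 150, 200]
dst = [90, 135, 180]
corrected = apply_lut(src, build_lut(src, dst))  # → [90, 135, 180]
```

After correction, the neighboring view matches the current view's color response, so inter-view prediction residuals shrink.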


IEEE Transactions on Circuits and Systems for Video Technology | 2007

View Scalable Multiview Video Coding Using 3-D Warping With Depth Map

Shinya Shimizu; Masaki Kitahara; Hideaki Kimata; Kazuto Kamikura; Yoshiyuki Yashima

Multiview video coding demands high compression rates as well as view scalability, which enables the video to be displayed on a multitude of different terminals. In order to achieve view scalability, it is necessary to limit the inter-view prediction structure. In this paper, we propose a new multiview video coding scheme that can improve the compression efficiency under such a limited inter-view prediction structure. All views are divided into two groups in the proposed scheme: the base view and the enhancement views. The proposed scheme first estimates a view-dependent geometry of the base view. It then uses a video encoder to encode the video of the base view. The view-dependent geometry is also encoded by the video encoder. The scheme then generates prediction images of the enhancement views from the decoded video and the view-dependent geometry by using image-based rendering techniques, and it makes residual signals for each enhancement view. Finally, it encodes the residual signals with the conventional video encoder as if they were regular video signals. We implement one encoder that employs this scheme by using a depth map as the view-dependent geometry and 3-D warping as the view generation method. In order to increase the coding efficiency, we adopt the following three modifications: (1) object-based interpolation on 3-D warping; (2) depth estimation with consideration of rate-distortion costs; and (3) quarter-pel accuracy depth representation. Experiments show that the proposed scheme offers about 30% higher compression efficiency than the conventional scheme, even though one depth map video is added to the original multiview video.
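The 3-D warping step above can be illustrated with a minimal sketch for parallel cameras, where the disparity of each pixel is focal × baseline / depth. The integer disparity, single-scanline frame, and hole handling here are simplifying assumptions, not the paper's object-based interpolation.

```python
# Minimal sketch of depth-based forward warping for parallel cameras.
# Each pixel shifts horizontally by its disparity d = focal * baseline / z;
# target positions never written remain None (holes), and pixels warped
# outside the view are dropped. This is an illustration, not the paper's
# exact view-generation method.

def warp_row(row, depth, focal, baseline):
    """Forward-warp one scanline into the target view."""
    out = [None] * len(row)
    for x, (v, z) in enumerate(zip(row, depth)):
        d = round(focal * baseline / z)   # integer disparity for simplicity
        tx = x + d
        if 0 <= tx < len(out):
            out[tx] = v                   # later writes overwrite earlier ones
    return out

row   = [10, 20, 30, 40]
depth = [100, 100, 50, 50]  # nearer pixels (smaller depth) shift further
warped = warp_row(row, depth, focal=100, baseline=1)
# → [None, 10, 20, None]: the near pixels left the view, leaving holes
```

The enhancement-view encoder then codes only the residual between this synthesized image and the actual camera image.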


IEEE Transactions on Circuits and Systems for Video Technology | 1997

Two-stage motion compensation using adaptive global MC and local affine MC

Hirohisa Jozawa; Kazuto Kamikura; Atsushi Sagata; Hiroshi Kotera; Hiroshi Watanabe

This paper describes a high-efficiency video coding method based on ITU-T H.263. To improve the coding efficiency of H.263, a two-stage motion compensation (MC) method is proposed, consisting of global MC (GMC) for predicting camera motion and local MC (LMC) for macroblock prediction. First, global motion such as panning, tilting, and zooming is estimated, and the global-motion-compensated image is produced for use as a reference in LMC. Next, LMC is performed both for the global-motion-compensated reference image and for the image without GMC. LMC employs an affine motion model in the context of H.263's overlapped block motion compensation. Using the overlapped block affine MC, rotation and scaling of small objects can be predicted, in addition to translational motion. In the proposed method, GMC is adaptively turned on/off for each macroblock, since GMC cannot be used for prediction in all regions of a frame. In addition, either an affine or a translational motion model is adaptively selected in LMC for each macroblock. Simulation results show that the proposed video coding technique using the two-stage MC significantly outperforms H.263 under identical conditions, especially for sequences with fast camera motion. The improvement in peak signal-to-noise ratio (PSNR) is about 3 dB over the original H.263, which does not use the two-stage MC.
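The per-macroblock adaptive selection described above can be sketched as a simple mode decision: each block uses whichever predictor (translational or affine) yields the smaller SAD. The tiny frame, nearest-neighbour sampling, and given motion parameters are illustrative assumptions, not the codec's actual estimation or interpolation.

```python
# Hedged sketch of per-macroblock translational-vs-affine mode selection.
# Function names and the toy 3x3 frame are hypothetical.

def sad(a, b):
    """Sum of absolute differences between two pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def predict_translational(ref, w, block, dx):
    """Shift the block's pixel coordinates horizontally by dx."""
    return [ref[y * w + x + dx] for (x, y) in block]

def predict_affine(ref, w, block, params):
    """Sample the reference at affine-mapped coordinates (rounded)."""
    a, b, c, d, e, f = params
    return [ref[round(d * x + e * y + f) * w + round(a * x + b * y + c)]
            for (x, y) in block]

def choose_mode(ref, w, cur, block, dx, affine_params):
    """Pick the predictor with the smaller SAD, as in the adaptive selection."""
    t = predict_translational(ref, w, block, dx)
    af = predict_affine(ref, w, block, affine_params)
    return ("translational", t) if sad(cur, t) <= sad(cur, af) else ("affine", af)

# 3x3 reference frame (row-major) and a 2x2 block at the top-left corner:
ref = [1, 2, 3,
       4, 5, 6,
       7, 8, 9]
block = [(0, 0), (1, 0), (0, 1), (1, 1)]
mode, pred = choose_mode(ref, 3, [2, 3, 5, 6], block, dx=1,
                         affine_params=(1, 0, 0, 0, 1, 0))
# The block simply moved one pixel right, so the translational mode wins.
```

The same comparison, done against both the GMC reference and the plain reference, gives the four-way choice the paper signals per macroblock.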


IEEE Transactions on Circuits and Systems for Video Technology | 1998

Global brightness-variation compensation for video coding

Kazuto Kamikura; Hiroshi Watanabe; Hirohisa Jozawa; Hiroshi Kotera; Susumu Ichinose

A global brightness-variation compensation (GBC) scheme is proposed to improve the coding efficiency for video scenes that contain global brightness-variations caused by fade in/out, camera-iris adjustment, flicker, illumination change, and so on. In this scheme, a set of two brightness-variation parameters, which represent multiplier and offset components of the brightness-variation in the whole frame, is estimated. The brightness-variation is then compensated using the parameters. We also propose a method to apply the GBC scheme to component signals for video coding. Furthermore, a block-by-block on/off control method for GBC is introduced to improve the coding performance even for scenes including local variations in brightness caused by camera flashes, spotlights, and the like. Simulation results show that the proposed GBC scheme with the on/off control method improves the peak signal-to-noise ratio (PSNR) by 2-4 dB when the coded scenes contain brightness-variations.
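The two-parameter model in the abstract (a per-frame multiplier and offset) lends itself to a compact sketch. Using a least-squares fit over flattened pixel lists is an illustrative assumption for this example, not necessarily the paper's exact estimation procedure.

```python
# Sketch of global brightness-variation compensation (GBC) under the
# two-parameter model: cur(x) ≈ a * ref(x) + b, with a (multiplier) and
# b (offset) estimated per frame by least squares. Names are illustrative.

def estimate_gbc_params(ref, cur):
    """Least-squares fit of cur ≈ a * ref + b over all pixels."""
    n = len(ref)
    sx = sum(ref)
    sy = sum(cur)
    sxx = sum(r * r for r in ref)
    sxy = sum(r * c for r, c in zip(ref, cur))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def compensate(ref, a, b):
    """Apply the estimated parameters to the reference before prediction."""
    return [a * r + b for r in ref]

# Example: a fade makes the current frame 0.8x as bright plus an offset of 10.
ref = [50.0, 100.0, 150.0, 200.0]
cur = [0.8 * r + 10.0 for r in ref]
a, b = estimate_gbc_params(ref, cur)  # a ≈ 0.8, b ≈ 10.0
```

With the compensated reference matching the current frame's brightness, the motion-compensated residual no longer carries the fade, which is where the reported 2-4 dB gain comes from.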


computer and information technology | 2004

System design of free viewpoint video communication

Hideaki Kimata; Masaki Kitahara; Kazuto Kamikura; Y. Yashima; Toshiaki Fujii; Masayuki Tanimoto

We propose a free viewpoint video communication method that allows the user to change his/her viewing point and viewing direction freely. This is expected to be a next-generation visual application in the near future. We propose a multiple-view video coding method and a communication protocol suitable for this application, and demonstrate the coding efficiency of the proposed method. We also demonstrate that the developed viewer can generate a view from an arbitrary viewpoint and view direction. This viewer uses Ray-Space interpolation and extrapolation methods to generate a natural view.


international conference on multimedia and expo | 2006

Multi-View Video Coding using View Interpolation and Reference Picture Selection

Masaki Kitahara; Hideaki Kimata; Shinya Shimizu; Kazuto Kamikura; Yoshiyuki Yashima; Kenji Yamamoto; Tomohiro Yendo; Toshiaki Fujii; Masayuki Tanimoto

We propose a new multi-view video coding method using adaptive selection of motion/disparity compensation based on H.264/AVC. One of the key points of the proposed method is the use of view interpolation as a tool for disparity compensation, by assigning reference picture indices to interpolated images. Experimental results show that significant gains can be obtained compared to the commonly used conventional approach.


international conference on multimedia and expo | 2004

Hierarchical reference picture selection method for temporal scalability beyond H.264

Hideaki Kimata; Masaki Kitahara; Kazuto Kamikura; Yoshiyuki Yashima

Temporal scalability is effective for adapting the bitstream to the various capabilities of video terminals and the delivery network, e.g., the processing speed of the terminals and the transmission rate, respectively. This technique uses the reference picture selection method on both the current layer and lower layers. Conventional scalability selects only from the last previous picture of the current layer and that of the first lower layer. This paper proposes a novel prediction scheme, the hierarchical reference picture selection (HRPS) method, in which the reference picture is selected from more previous pictures in more layers, in order to improve coding efficiency while keeping the temporal scalability functionality. The proposed method is developed as a modification of H.264. This paper demonstrates the effectiveness of HRPS compared with conventional temporal scalable methods.
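The layered structure this relies on can be sketched as follows, assuming a dyadic temporal decomposition: a picture may reference earlier pictures in its own layer or any lower layer, so the top layers can be dropped without breaking decoding. The layer assignment and GOP size here are illustrative assumptions, not details from the paper.

```python
# Sketch of dyadic temporal layers and the enlarged reference set that
# hierarchical reference picture selection implies. Function names and
# the GOP size of 8 are hypothetical.

def temporal_layer(poc, gop=8):
    """Dyadic layer index: 0 at GOP boundaries, larger for in-between pictures."""
    if poc % gop == 0:
        return 0
    layer, step = 1, gop // 2
    while poc % step != 0:
        step //= 2
        layer += 1
    return layer

def reference_candidates(poc, gop=8):
    """Earlier pictures whose layer does not exceed this picture's layer."""
    lay = temporal_layer(poc, gop)
    return [p for p in range(poc) if temporal_layer(p, gop) <= lay]

# Picture 6 (layer 2) may reference pictures 0, 2, and 4 — more candidates
# than the single previous picture per layer that conventional scalability allows.
```

Because no picture ever references a higher layer, truncating the bitstream to the lower layers still yields a decodable, lower-frame-rate video.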


international conference on multimedia and expo | 2008

Real-time MVC viewer for free viewpoint navigation

Hideaki Kimata; Shinya Shimizu; Yutaka Kunita; Megumi Isogai; Kazuto Kamikura; Yoshiyuki Yashima

A real-time MVC viewer for free-viewpoint navigation is demonstrated. MVC, a video coding standard for multi-view video, is an extension of the state-of-the-art H.264/AVC. Realizing parallel decoding and view synthesis is essential for developing real-time viewers. A new decoder architecture is introduced for the proposed viewer, along with a file format for MVC. View synthesis is performed entirely on the GPU. It is demonstrated that the MVC viewer can decode an MVC bitstream and generate a virtual camera image in real time.


visual communications and image processing | 2009

Intra prediction with spatial gradient

Shohei Matsuo; Seishi Takamura; Kazuto Kamikura; Yoshiyuki Yashima

Spatial intra prediction was recently added to the latest video coding standard, H.264/AVC. In the intra prediction of H.264/AVC, there are 9, 9, and 4 prediction modes for 4×4, 8×8, and 16×16 blocks, respectively. Prediction signals are generated by using one or several reference pixels. The value of a reference pixel is copied as the prediction value; in some prediction modes, a weighted mean of several pixels is used. The same prediction value is copied to all pixels lying along the prediction direction. However, if the original image has gradation-like patterns, the residual energy can increase, resulting in low coding efficiency. In this paper, we propose a new intra prediction method that generates prediction signals with a spatial gradient to deal with this problem. Simulation results show that it improves picture quality by about 0.14 dB and reduces the bit-rate by about 1.0% on average for CIF sequences. It is also confirmed that our method is effective at high bit-rates.
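The difference between flat copying and gradient-aware prediction can be sketched for the vertical mode: instead of repeating the top reference row down the block, each row adds a per-row increment. Estimating that increment from the left reference column is an illustrative assumption, not the paper's exact estimator.

```python
# Sketch of vertical intra prediction with and without a spatial gradient.
# The gradient is estimated from the left reference column; names and the
# estimator are hypothetical.

def predict_vertical_flat(top, n):
    """H.264/AVC-style vertical mode: copy the top row down the block."""
    return [list(top) for _ in range(n)]

def predict_vertical_gradient(top, left, n):
    """Vertical mode plus a per-row increment estimated from the left column."""
    grad = (left[-1] - left[0]) / (n - 1)
    return [[t + round((y + 1) * grad) for t in top] for y in range(n)]

# A block whose brightness ramps downward: the flat copy leaves a large
# residual, while the gradient predictor tracks the ramp.
top  = [10, 10, 10, 10]
left = [12, 14, 16, 18]
pred = predict_vertical_gradient(top, left, 4)
# → rows [12...], [14...], [16...], [18...], matching a linear gradation
```

For gradation-like content, the residual after the gradient predictor is near zero, which is the effect behind the reported bit-rate savings.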


picture coding symposium | 2009

Adaptive down-sampling of frame-rate for high frame-rate video

Yukihiro Bandoh; Seishi Takamura; Kazuto Kamikura; Yoshiyuki Yashima

Over the past decade, video acquisition rates, which had been 24 Hz (cinema), 30-60 Hz (webcam), or 50-60 Hz (SD/HD cameras), have broken through to reach 1000 Hz. In order to display these high frame-rate video signals on current display devices in real time, they must first be down-sampled. This study proposes a down-sampling method suitable for high frame-rate video signals, designed with the goal of reducing the inter-frame prediction error. Our method improves the PSNR of the prediction signal by 0.13 dB to 0.23 dB compared to simple sub-sampling at a constant interval.
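The idea of choosing kept frames to minimize prediction error, rather than keeping every k-th frame, can be sketched as a small search. Representing each frame by a scalar "content" value, using squared differences as the prediction-error proxy, and brute-forcing the subset are all simplifying assumptions for illustration.

```python
# Sketch of adaptive frame-rate down-sampling: keep the first and last
# frames plus the interior subset that minimizes the total inter-frame
# prediction error. Scalar frames and exhaustive search are illustrative
# assumptions, not the paper's method.
from itertools import combinations

def prediction_error(frames, kept):
    """Sum of squared differences between consecutive kept frames."""
    return sum((frames[b] - frames[a]) ** 2 for a, b in zip(kept, kept[1:]))

def adaptive_downsample(frames, n_keep):
    """Search all interior subsets of size n_keep - 2 for the minimum error."""
    last = len(frames) - 1
    best = None
    for mid in combinations(range(1, last), n_keep - 2):
        kept = (0,) + mid + (last,)
        err = prediction_error(frames, kept)
        if best is None or err < best[0]:
            best = (err, kept)
    return list(best[1])

# A big content jump sits between frames 2 and 3, so the adaptive choice
# keeps both sides of the jump instead of sampling at a constant interval:
frames = [0, 1, 2, 10, 11, 12]
kept = adaptive_downsample(frames, 4)  # → [0, 2, 3, 5]
```

Constant-interval sub-sampling would straddle the jump and pay a large prediction error, which is what the adaptive selection avoids.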

Collaboration


Dive into Kazuto Kamikura's collaborations.

Top Co-Authors

Takeshi Yoshitome

Nippon Telegraph and Telephone


Kenji Yamamoto

National Institute of Information and Communications Technology


Tomohiro Yendo

Nagaoka University of Technology