Publication


Featured research published by Takeshi Chujoh.


International Conference on Image Processing | 2007

Block Based Extra/Inter-Polating Prediction for Intra Coding

Taichiro Shiodera; Akiyuki Tanizawa; Takeshi Chujoh

In this paper, we propose an efficient intra coding method that adds an extra/inter-polating prediction mode to the conventional H.264/MPEG-4 AVC (H.264) intra prediction. The proposal consists of two parts. The first changes the sub-block coding order within a macroblock (MB): the bottom-right sub-block of the MB is predicted first, using the reference pels on its upper and left sides; the remaining sub-blocks are then predicted using not only the reference pels on the upper and/or left sides but also those on the bottom and/or right sides of each sub-block. The second part is an intra bidirectional prediction method that interpolates between the prediction from the upper/left pels and the prediction from the bottom/right pels. Experimental results show that our method achieves a bitrate reduction of up to 7.7% at the same PSNR compared to H.264.
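
As a rough illustration of the bidirectional part, the sketch below blends two directional predictions of the same block, one built from upper/left reference pels and one from bottom/right pels. The uniform 50/50 weighting and the 4x4 vertical-prediction example are assumptions for illustration, not the interpolation defined in the paper.

```python
import numpy as np

def bidirectional_intra_prediction(pred_from_upper_left, pred_from_bottom_right):
    """Blend two directional intra predictions of the same block: one built
    from the upper/left reference pels and one from the bottom/right pels.
    The uniform 50/50 weighting is an assumption for illustration."""
    a = pred_from_upper_left.astype(np.int32)
    b = pred_from_bottom_right.astype(np.int32)
    return ((a + b + 1) >> 1).astype(np.uint8)

# Toy 4x4 example: vertical prediction from the row above vs. the row below
# (the latter becomes available once the sub-block coding order is changed).
top_row = np.array([100, 102, 104, 106], dtype=np.uint8)
bottom_row = np.array([120, 118, 116, 114], dtype=np.uint8)
pred_top = np.tile(top_row, (4, 1))
pred_bottom = np.tile(bottom_row, (4, 1))
print(bidirectional_intra_prediction(pred_top, pred_bottom))
```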


IEEE Journal of Selected Topics in Signal Processing | 2013

Adaptive Loop Filtering for Video Coding

Chia-Yang Tsai; Ching-Yeh Chen; Tomoo Yamakage; In Suk Chong; Yu-Wen Huang; Chih-Ming Fu; Takayuki Itoh; Takashi Watanabe; Takeshi Chujoh; Marta Karczewicz; Shaw-Min Lei

Adaptive loop filtering (ALF) for video coding minimizes the mean square error between original samples and decoded samples by using a Wiener-based adaptive filter. The proposed ALF is located at the last processing stage for each picture and can be regarded as a tool that catches and fixes artifacts from previous stages. The suitable filter coefficients are determined by the encoder and explicitly signaled to the decoder. In order to achieve better coding efficiency, especially for high-resolution video, local adaptation is used for luma signals by applying different filters to different regions or blocks in a picture. In addition to filter adaptation, filter on/off control at the coding tree unit (CTU) level is also helpful for improving coding efficiency. Syntax-wise, filter coefficients are sent in a picture-level header called the adaptation parameter set, and filter on/off flags of CTUs are interleaved at the CTU level in the slice data. This syntax design not only supports picture-level optimization but also achieves low encoding latency. Simulation results show that the ALF can achieve on average 7% bitrate reduction for 25 HD sequences. The run-time increases are 1% and 10% for encoders and decoders, respectively, without special attention to optimization of the C++ code.
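
The Wiener-based training step can be illustrated with a minimal least-squares sketch. A 1-D filter over a single row of samples is assumed here purely for brevity; the actual ALF uses 2-D filter shapes with block classification.

```python
import numpy as np

def wiener_filter_train(decoded, original, taps=5):
    """Train a 1-D FIR filter by least squares so that filtering the decoded
    samples minimizes the mean square error against the original samples.
    (A 1-D filter is an illustrative simplification of the 2-D ALF filters.)"""
    d = decoded.astype(np.float64)
    o = original.astype(np.float64)
    half = taps // 2
    # One row per sample: its neighbourhood in the decoded signal.
    X = np.stack([d[i - half:i + half + 1] for i in range(half, len(d) - half)])
    y = o[half:len(o) - half]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # Wiener / normal-equation solution
    return X, y, coeffs

rng = np.random.default_rng(0)
original = 128 + 100 * np.sin(np.linspace(0, 8 * np.pi, 2048))   # smooth "clean" picture row
decoded = original + rng.normal(0, 4, size=original.size)        # simulated coding noise
X, y, coeffs = wiener_filter_train(decoded, original)
filtered = X @ coeffs
print("MSE before filtering:", np.mean((decoded[2:-2] - y) ** 2))
print("MSE after filtering :", np.mean((filtered - y) ** 2))
```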


International Conference on Image Processing | 2009

In-loop filter using block-based filter control for video coding

Takashi Watanabe; Goki Yasuda; Akiyuki Tanizawa; Takeshi Chujoh; Tomoo Yamakage

In this paper, a method of improving coding efficiency is proposed that uses the Wiener filter as an in-loop filter. The Wiener filter can minimize the mean square error between the input image and the decoded image. However, the errors of some pixels are increased by the filtering process, and since the filtered pixels are used for motion-compensated prediction, these errors propagate to subsequent images. The proposed method divides the decoded image into fixed-size blocks and decides adaptively whether to apply the filter to each block. By preventing the increase in errors after the filtering process, the coding efficiency can be improved. Experimental results show that the proposed method achieves a bitrate reduction of up to 33.9% in the Baseline Profile and up to 33.0% in the High Profile at the same PSNR compared to H.264.
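
The block-based on/off control can be sketched as follows, assuming a simple square block grid and a per-block sum-of-squared-errors test; the 16x16 block size, the decision rule, and the toy inputs are illustrative assumptions.

```python
import numpy as np

def block_onoff_filter(decoded, filtered, original, block=16):
    """Keep the filtered samples only in blocks where filtering lowers the
    squared error against the original; elsewhere keep the decoded samples.
    Returns the result plus the per-block on/off flags an encoder would signal."""
    out = decoded.copy()
    flags = np.zeros((decoded.shape[0] // block, decoded.shape[1] // block), dtype=bool)
    for by in range(flags.shape[0]):
        for bx in range(flags.shape[1]):
            sl = (slice(by * block, (by + 1) * block), slice(bx * block, (bx + 1) * block))
            err_off = np.sum((decoded[sl].astype(np.int64) - original[sl]) ** 2)
            err_on = np.sum((filtered[sl].astype(np.int64) - original[sl]) ** 2)
            if err_on < err_off:              # the filter helps this block
                out[sl] = filtered[sl]
                flags[by, bx] = True
    return out, flags

rng = np.random.default_rng(1)
original = rng.integers(0, 256, (64, 64))
decoded = original + rng.integers(-3, 4, (64, 64))
filtered = (decoded + original) // 2          # stand-in for a Wiener-filtered frame
result, flags = block_onoff_filter(decoded, filtered, original)
print("filter enabled for", int(flags.sum()), "of", flags.size, "blocks")
```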


International Conference on Image Processing | 2004

A study on fast rate-distortion optimized coding mode decision for H.264

Akiyuki Tanizawa; Shinichiro Koto; Takeshi Chujoh; Yoshihiro Kikuchi

The H.264 video coding standard achieves considerably higher coding efficiency than existing standards by selecting the best mode among many prediction modes and various sizes of prediction blocks. Although coding efficiency is improved by using Lagrangian optimization for the mode decision, computational complexity at the encoder increases significantly. In this paper, we propose a fast rate-distortion optimization method based on hierarchical and adaptive coding mode decision that reduces the number of candidates for the best coding mode.
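
A hedged sketch of the general idea, a Lagrangian cost J = D + lambda * R combined with hierarchical pruning of the candidate list, is given below. The mode names, cost values, and the number of retained candidates are placeholders, not the paper's actual decision rules.

```python
def rd_cost(distortion, rate, lmbda):
    """Lagrangian rate-distortion cost used to compare coding modes."""
    return distortion + lmbda * rate

def decide_mode(candidates, coarse_cost, full_cost, lmbda, keep=3):
    """Two-stage (hierarchical) decision: rank all modes with a cheap cost
    estimate, keep only the best few, then run the expensive RD evaluation
    on the survivors."""
    ranked = sorted(candidates, key=lambda m: coarse_cost(m))
    best_mode, best_j = None, float("inf")
    for mode in ranked[:keep]:                    # pruned candidate set
        distortion, rate = full_cost(mode)        # e.g. SSD and real bit count
        j = rd_cost(distortion, rate, lmbda)
        if j < best_j:
            best_mode, best_j = mode, j
    return best_mode, best_j

# Hypothetical costs for a handful of H.264 macroblock modes.
modes = ["INTRA_16x16", "INTRA_4x4", "INTER_16x16", "INTER_8x8", "SKIP"]
coarse = {"SKIP": 10, "INTER_16x16": 40, "INTER_8x8": 55, "INTRA_16x16": 90, "INTRA_4x4": 120}
full = {"SKIP": (900, 1), "INTER_16x16": (400, 60), "INTER_8x8": (350, 95),
        "INTRA_16x16": (500, 80), "INTRA_4x4": (300, 150)}
print(decide_mode(modes, coarse.get, lambda m: full[m], lmbda=4.0))
```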


Proceedings of SPIE | 2012

The adaptive loop filtering techniques in the HEVC standard

Ching-Yeh Chen; Chia-Yang Tsai; Yu-Wen Huang; Tomoo Yamakage; In Suk Chong; Chih-Ming Fu; Takayuki Itoh; Takashi Watanabe; Takeshi Chujoh; Marta Karczewicz; Shaw-Min Lei

This article introduces the adaptive loop filtering (ALF) techniques being considered for the HEVC standard. The key idea of ALF is to minimize the mean square error between original pixels and decoded pixels using Wiener-based adaptive filter coefficients. ALF is located at the last processing stage of each picture and can be regarded as a tool that tries to catch and fix artifacts from previous stages. The suitable filter coefficients are determined by the encoder and explicitly signaled to the decoder. In order to achieve better coding efficiency, especially for high-resolution video, local adaptation is used for luma signals by applying different filters to different regions in a picture. In addition to filter adaptation, filter on/off control at the largest coding unit (LCU) level is also helpful for improving coding efficiency. Syntax-wise, filter coefficients are sent in a picture-level header called the adaptation parameter set (APS), and filter on/off flags of LCUs are interleaved at the LCU level in the slice data. Besides supporting picture-based optimization of ALF, the syntax design can also support low-delay applications. When the filter coefficients in the APS are trained on a previous picture, filter on/off decisions can be made on the fly during encoding of LCUs, so the encoding latency is only one LCU. Simulation results show that the ALF can achieve on average 5% and up to 27% bitrate reduction for 25 HD sequences. The run-time increases are 1% and 10% for encoders and decoders, respectively, with unoptimized C++ code.
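
The low-delay aspect can be sketched as follows: when the coefficients come from a previously coded picture's APS, the on/off flag of each LCU can be decided as soon as that LCU is reconstructed. The function signature and helper callbacks are assumptions for illustration, not HM encoder code.

```python
def encode_picture_low_delay(lcus, coeffs_from_previous_aps, apply_filter, sse):
    """Per-LCU ALF on/off decision made on the fly during encoding.

    lcus: iterable of (reconstructed_lcu, original_lcu) pairs, in coding order
    apply_filter(lcu, coeffs) -> filtered LCU samples
    sse(a, b) -> sum of squared errors between two LCUs
    Returns the on/off flags that would be interleaved into the slice data;
    because the coefficients are already known, the latency is one LCU.
    """
    onoff_flags = []
    for reconstructed, original in lcus:
        filtered = apply_filter(reconstructed, coeffs_from_previous_aps)
        onoff_flags.append(sse(filtered, original) < sse(reconstructed, original))
    return onoff_flags
```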


International Conference on Image Processing | 2003

Adaptive bi-predictive video coding using temporal extrapolation

Shinichiro Koto; Takeshi Chujoh; Yoshihiro Kikuchi

In this paper, we propose a new multi-frame bi-predictive motion compensation method based on temporal linear extrapolation. The proposed method obtains significant coding gain, especially for fading and dissolving scenes, which are among the most difficult to compress with conventional coding standards. Moreover, stable improvements in coding efficiency are obtained by using block-by-block adaptation combined with multi-frame averaging prediction. The number of reference frames for the proposed temporal prediction is limited to the two most recently coded frames, so only addition, subtraction, and shift operations are needed to generate the prediction signal for both extrapolation and averaging. On the encoder side, adaptation is performed automatically without detecting scene characteristics such as fade-in, fade-out, and dissolve. The additional computational cost of the proposed method is therefore very small for both the encoder and the decoder.
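
The two prediction signals can be sketched with integer-only arithmetic as below; this reproduces only the basic linear extrapolation and averaging operations, not the paper's block-by-block adaptation.

```python
import numpy as np

def extrapolation_prediction(ref_newer, ref_older):
    """Linear temporal extrapolation from the two most recently coded frames:
    p = 2*ref_newer - ref_older, using only add/subtract/shift, clipped to the
    8-bit sample range. Suits fades, where brightness changes roughly linearly."""
    p = (ref_newer.astype(np.int32) << 1) - ref_older.astype(np.int32)
    return np.clip(p, 0, 255).astype(np.uint8)

def averaging_prediction(ref_a, ref_b):
    """Multi-frame averaging prediction: p = (ref_a + ref_b + 1) >> 1."""
    p = (ref_a.astype(np.int32) + ref_b.astype(np.int32) + 1) >> 1
    return p.astype(np.uint8)

# Toy fade-in: brightness rises by 10 per frame, so extrapolating from frames
# t-2 and t-1 lands exactly on frame t.
f = [np.full((4, 4), 100 + 10 * t, dtype=np.uint8) for t in range(3)]
print(np.array_equal(extrapolation_prediction(f[1], f[0]), f[2]))   # True
```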


International Conference on Image Processing | 2012

Multi-directional implicit weighted prediction based on image characteristics of reference pictures for inter coding

Akiyuki Tanizawa; Takeshi Chujoh; Tomoo Yamakage

Weighted prediction (WP) in H.264/MPEG-4 AVC (H.264) is an efficient motion compensation technique for predicting illumination variation and consists of two modes: explicit WP (EWP) and implicit WP (IWP). In IWP, only the weighting factors are implicitly derived from the reference pictures used for bi-prediction and the offsets are always set to 0, so IWP cannot use optimal WP parameters and cannot be applied to uni-prediction. To overcome these issues, this paper presents a novel decoding process that derives WP parameters implicitly, the Multi-Directional Implicit Weighted Prediction (MD-IWP) method. WP parameters are derived from the image characteristics of the reference pictures. By linearly predicting the image characteristics of the target picture from different reference pictures stored in the decoded picture buffer, WP parameters for uni-prediction can also be derived. The method achieves 15% to 50% bitrate reduction compared to the conventional IWP method, a gain that is almost the same as or better than EWP with a simple one-pass parameter estimation method.
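
As a loose illustration of implicit derivation, the sketch below linearly extrapolates picture means over time to obtain a fixed-point weight for one reference picture. The use of frame means, the POC-based linear model, and the fixed-point precision are assumptions for illustration, not the derivation defined in the paper.

```python
def derive_implicit_wp_params(ref_means, ref_pocs, target_poc, log2_denom=6):
    """Hypothetical sketch: derive a (weight, offset) pair for reference 0 by
    linearly predicting the target picture's mean from two reference means.

    ref_means: means of two decoded reference pictures
    ref_pocs:  their picture order counts (display positions)
    """
    (m0, m1), (p0, p1) = ref_means, ref_pocs
    # Linearly predict the target picture's mean from the two references.
    predicted_target_mean = m0 + (m1 - m0) * (target_poc - p0) / (p1 - p0)
    # Express the brightness change w.r.t. reference 0 as a fixed-point weight
    # (no offset in this simplified model).
    weight = int(round((predicted_target_mean / m0) * (1 << log2_denom)))
    return weight, 0

# Fade example: reference means 80 (POC 0) and 100 (POC 1), target at POC 2.
print(derive_implicit_wp_params((80.0, 100.0), (0, 1), 2))   # (96, 0), i.e. 1.5 in Q6
```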


International Conference on Image Processing | 2013

Efficient weighted prediction parameter signaling using parameter prediction and direct derivation

Akiyuki Tanizawa; Takeshi Chujoh

Weighted prediction (WP) is an efficient motion compensation technique for predicting temporal illumination variation. It was introduced in H.264/MPEG-4 AVC (Advanced Video Coding, H.264) and consists of two modes: explicit WP (EWP) and implicit WP (IWP). In EWP, a weighting factor and an offset for each reference picture are explicitly transmitted to the decoder, so the overhead of these parameters affects coding efficiency. Because H.264-based EWP encodes these parameters directly, without prediction, it is difficult to apply WP in low-bitrate applications. To reduce this overhead, this paper presents an efficient weighted prediction parameter signaling method for the High Efficiency Video Coding (HEVC) standard that introduces WP parameter prediction and direct WP parameter derivation. A weighting factor and an offset are predicted using the characteristics of WP parameters and then entropy-coded. Moreover, when the reference picture lists 0 and 1 of a B-slice contain the same entries, the set of WP parameters for prediction list 1 is derived directly from that for prediction list 0. Experimental results show that our method achieves up to 66% overhead bitrate reduction and up to 7% overall bitrate reduction compared to the conventional H.264-based EWP.
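
The two signaling ideas can be sketched as follows, with deliberately simplified predictors (weight predicted as 1.0 in fixed point, offset as 0); the paper's actual predictors, syntax elements, and entropy coding differ.

```python
def predict_wp_parameters(log2_denom):
    """Default prediction: weight 1.0 in fixed point, offset 0 (an assumption)."""
    return (1 << log2_denom), 0

def encode_wp_delta(weight, offset, log2_denom):
    """Encoder side: send only the (typically small) differences from the prediction."""
    pred_w, pred_o = predict_wp_parameters(log2_denom)
    return weight - pred_w, offset - pred_o

def decode_wp_delta(delta_w, delta_o, log2_denom):
    """Decoder side: reconstruct the WP parameters from the received deltas."""
    pred_w, pred_o = predict_wp_parameters(log2_denom)
    return pred_w + delta_w, pred_o + delta_o

def derive_list1_from_list0(list0_params, list1_refs):
    """Direct derivation: a list-1 reference that also appears in list 0 reuses
    the list-0 (weight, offset) pair instead of signaling a second set."""
    return {ref: list0_params[ref] for ref in list1_refs if ref in list0_params}

log2_denom = 6
deltas = encode_wp_delta(weight=68, offset=-2, log2_denom=log2_denom)
print(deltas)                                    # (4, -2): cheap to entropy-code
print(decode_wp_delta(*deltas, log2_denom))      # (68, -2)
print(derive_list1_from_list0({0: (68, -2), 1: (64, 0)}, list1_refs=[1, 2]))
```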


The Journal of The Institute of Image Information and Television Engineers | 1997

Mobile Image Media and Picture Coding. Error Resilient Motion Picture Coding for Mobile Video Communication.

Yoshihiro Kikuchi; Takeshi Chujoh; Takeshi Nagai; Toshiaki Watanabe

A motion picture coding scheme with high error resiliency, applicable to wireless communication, is proposed. Two methods are introduced to minimize the degradation of decoded images: a re-synchronization method that recovers synchronization after a bit-slip of a variable-length code (VLC), and two-way decoding using reversible VLCs. To recover erroneous decoded images, an adaptive refreshing method is applied in which the encoder selects non-stationary areas for refresh by using the coding modes of past frames. Simulation results show that the proposed scheme has high error resiliency and is applicable to highly error-prone environments with a BER larger than 0.1%.
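
Two-way decoding relies on codewords that can be parsed from either end of the bitstream. The toy reversible VLC below uses palindromic codewords to show this; the code table is illustrative only, not the one used in the proposed scheme, and the error-localization logic around a corrupted region is omitted.

```python
RVLC = {"A": "0", "B": "11", "C": "101", "D": "1001"}   # symmetric (palindromic) codewords
DECODE = {code: sym for sym, code in RVLC.items()}

def rvlc_encode(symbols):
    return "".join(RVLC[s] for s in symbols)

def rvlc_decode(bits):
    """Greedy prefix decoding; also works on the reversed bitstream because
    every codeword reads the same in both directions, which is what lets a
    decoder salvage symbols from both ends of a corrupted segment."""
    symbols, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:
            symbols.append(DECODE[buf])
            buf = ""
    return symbols

bits = rvlc_encode("ABACD")
print(rvlc_decode(bits))                # forward decoding:  ['A', 'B', 'A', 'C', 'D']
print(rvlc_decode(bits[::-1])[::-1])    # backward decoding: same symbol sequence
```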


Archive | 2003

Video encoding method and apparatus and video decoding method and apparatus

Shinichiro Koto; Takeshi Chujoh; Yoshihiro Kikuchi; Takeshi Nagai; Wataru Asano
