
Publication


Featured research published by Hirohisa Jozawa.


IEEE Transactions on Circuits and Systems for Video Technology | 1997

Two-stage motion compensation using adaptive global MC and local affine MC

Hirohisa Jozawa; Kazuto Kamikura; Atsushi Sagata; Hiroshi Kotera; Hiroshi Watanabe

This paper describes a high-efficiency video coding method based on ITU-T H.263. To improve the coding efficiency of H.263, a two-stage motion compensation (MC) method is proposed, consisting of global MC (GMC) for predicting camera motion and local MC (LMC) for macroblock prediction. First, global motion such as panning, tilting, and zooming is estimated, and the global-motion-compensated image is produced for use as a reference in LMC. Next, LMC is performed both on the global-motion-compensated reference image and on the image without GMC. LMC employs an affine motion model in the context of H.263's overlapped block motion compensation. Using the overlapped block affine MC, rotation and scaling of small objects can be predicted in addition to translational motion. In the proposed method, GMC is adaptively turned on/off for each macroblock, since GMC cannot be used for prediction in all regions of a frame. In addition, either an affine or a translational motion model is adaptively selected in LMC for each macroblock. Simulation results show that the proposed video coding technique using the two-stage MC significantly outperforms H.263 under identical conditions, especially for sequences with fast camera motion. The improvement in peak signal-to-noise ratio (PSNR) is about 3 dB over the original H.263, which does not use the two-stage MC.
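The affine prediction idea behind the LMC stage can be illustrated with a toy warp. This is a sketch only: the function name is hypothetical, sampling is nearest-neighbour rather than sub-pel, and the overlapped-block windowing of H.263 is omitted.

```python
import numpy as np

def affine_warp(ref, params, out_shape):
    """Warp a reference image with a 6-parameter affine model.

    params = (a, b, c, d, e, f) maps each output pixel (x, y) to the
    reference location (a*x + b*y + c, d*x + e*y + f), so the model can
    express translation, rotation, and scaling. Nearest-neighbour
    sampling keeps the sketch short; a real codec interpolates.
    """
    h, w = out_shape
    a, b, c, d, e, f = params
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(a * xs + b * ys + c), 0, ref.shape[1] - 1).astype(int)
    src_y = np.clip(np.rint(d * xs + e * ys + f), 0, ref.shape[0] - 1).astype(int)
    return ref[src_y, src_x]

# Identity parameters reproduce the reference, so the residual is zero.
ref = np.arange(64, dtype=np.float64).reshape(8, 8)
pred = affine_warp(ref, (1, 0, 0, 0, 1, 0), ref.shape)
residual = ref - pred
```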


IEEE Transactions on Circuits and Systems for Video Technology | 1998

Global brightness-variation compensation for video coding

Kazuto Kamikura; Hiroshi Watanabe; Hirohisa Jozawa; Hiroshi Kotera; Susumu Ichinose

A global brightness-variation compensation (GBC) scheme is proposed to improve the coding efficiency for video scenes that contain global brightness variations caused by fade-in/out, camera-iris adjustment, flicker, illumination change, and so on. In this scheme, a set of two brightness-variation parameters, which represent the multiplier and offset components of the brightness variation in the whole frame, is estimated. The brightness variation is then compensated using the parameters. We also propose a method to apply the GBC scheme to component signals for video coding. Furthermore, a block-by-block on/off control method for GBC is introduced to improve the coding performance even for scenes including local variations in brightness caused by camera flashes, spotlights, and the like. Simulation results show that the proposed GBC scheme with the on/off control method improves the peak signal-to-noise ratio (PSNR) by 2-4 dB when the coded scenes contain brightness variations.
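Fitting the two GBC parameters amounts to an ordinary least-squares problem over the whole frame, cur ≈ gain·ref + offset. A minimal sketch with synthetic data; `estimate_gbc_params` is a hypothetical name, not the paper's implementation:

```python
import numpy as np

def estimate_gbc_params(cur, ref):
    """Least-squares fit of cur ≈ gain * ref + offset over the whole frame."""
    A = np.stack([ref.ravel(), np.ones(ref.size)], axis=1)
    gain, offset = np.linalg.lstsq(A, cur.ravel(), rcond=None)[0]
    return gain, offset

rng = np.random.default_rng(0)
ref = rng.uniform(16, 235, size=(16, 16))
cur = 0.8 * ref + 20.0                 # synthetic fade: gain 0.8, offset 20
gain, offset = estimate_gbc_params(cur, ref)
compensated = gain * ref + offset      # brightness-compensated reference
```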


International Conference on Image Processing | 2010

A coding method for high bit-depth images based on optimized bit-depth transform

Takeshi Ito; Yukihiro Bandoh; Takamura Seishi; Hirohisa Jozawa

In recent years, high bit-depth (HBD) images with 10 bits/channel or more are being used more often for their improved image quality. However, HBD images are large, so more effective encoding techniques are needed. This paper considers the bit-depth transform process from the viewpoint of minimizing bit-depth transform error. The minimization method is designed according to an optimum quantization algorithm. As a result, our method improves coding efficiency by an average of 11% compared to the conventional method with AVC/H.264.
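One classical "optimum quantization algorithm" for bit-depth reduction is Lloyd's algorithm, which iteratively places codewords to minimize mean-squared error. A toy 10-bit-to-3-bit example follows; the paper's actual transform design may differ, and the names here are illustrative:

```python
import numpy as np

def lloyd_quantizer(samples, levels, iters=50):
    """Lloyd's algorithm: an MSE-optimal scalar quantizer with `levels` codewords."""
    codebook = np.linspace(samples.min(), samples.max(), levels)
    for _ in range(iters):
        # nearest-codeword assignment, then re-center each codeword on its cell
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        for k in range(levels):
            if np.any(idx == k):
                codebook[k] = samples[idx == k].mean()
    return codebook, idx

# 10-bit samples mapped down to 3-bit indices (8 levels)
rng = np.random.default_rng(1)
samples = rng.normal(512, 100, size=2000).clip(0, 1023)
codebook, idx = lloyd_quantizer(samples, 8)
mse_opt = np.mean((samples - codebook[idx]) ** 2)

# naive uniform bit-depth reduction as a baseline
step = 1024 / 8
uniform = (np.floor(samples / step) + 0.5) * step
mse_uniform = np.mean((samples - uniform) ** 2)
```

Because the codewords adapt to the sample distribution, the optimized transform wastes fewer levels on empty regions of the 10-bit range than a uniform right-shift does.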


Visual Communications and Image Processing | 1994

Segment-based video coding using an affine motion model

Hirohisa Jozawa

This paper describes a segment-based motion compensation (MC) method in which object motion is described using an affine motion model. In the proposed MC method, the shape of the objects is obtained using 5-D K-means clustering (employing three color components and two coordinates). Motion estimation and compensation employing the affine motion model are performed on each arbitrarily shaped object in the current frame. By using the affine motion model, rotation and scaling of objects can be compensated for in addition to translational motion. The residual between the current frame and the predicted one is DCT-coded and multiplexed with the motion and shape parameters. Simulation results show that the interframe prediction error is significantly reduced compared to the conventional block-based method. The SNR of sequences coded using the proposed method (modified MPEG-1 with segment-based affine MC) is about 0.8-2 dB higher than that of the conventional one (original MPEG-1 with block-based MC).
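The 5-D K-means segmentation step (three color components plus two coordinates per pixel) can be sketched as below. This is a toy frame with a hypothetical helper; a deterministic initialization replaces random seeding to keep the sketch reproducible:

```python
import numpy as np

def kmeans(features, k, iters=10):
    """Plain K-means on row-vector features."""
    # deterministic init: evenly spaced sample rows (for reproducibility)
    init = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[init].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy frame: black left half, red right half.
h, w = 16, 16
rgb = np.zeros((h, w, 3))
rgb[:, w // 2:] = [255, 0, 0]
ys, xs = np.mgrid[0:h, 0:w]
# 5-D feature per pixel: (R, G, B, x, y)
features = np.concatenate(
    [rgb.reshape(-1, 3), xs.reshape(-1, 1), ys.reshape(-1, 1)], axis=1)
labels = kmeans(features, k=2).reshape(h, w)
```

Including the two coordinates alongside color encourages spatially coherent segments, which is what makes per-segment affine motion estimation feasible.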


Visual Communications and Image Processing | 1997

Global brightness-fluctuation compensation in video coding

Kazuto Kamikura; Hirohisa Jozawa; Hiroshi Watanabe; Hiroshi Kotera

A global brightness-fluctuation compensation (GBC) scheme is proposed to improve coding efficiency for video scenes that contain global brightness fluctuations caused by fade-in/out, camera-iris adjustment, and so on. In this scheme, a set of two brightness-fluctuation parameters, which represent the contrast and offset components of the brightness fluctuation in the whole frame, is first estimated. The brightness fluctuation is then compensated using these parameters. Furthermore, a block-by-block ON/OFF control method for GBC is introduced to improve coding performance even for scenes including local fluctuations in brightness caused by camera flashes, spotlights, and the like. In this method, GBC is performed only on the blocks where the sum of the squared error produced using GBC is less than that produced without GBC. Simulation results show that the proposed GBC scheme with the ON/OFF control method greatly improves coding efficiency, especially for sequences with considerable global brightness fluctuation.
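The block-by-block ON/OFF rule, enabling GBC only where it lowers the squared error, can be sketched as follows. This assumes the contrast/offset parameters are already estimated; names and the test scene are hypothetical:

```python
import numpy as np

def gbc_onoff_map(cur, ref, gain, offset, block=8):
    """Enable GBC per block only where it yields a lower sum of squared error."""
    h, w = cur.shape
    use_gbc = np.zeros((h // block, w // block), dtype=bool)
    comp = gain * ref + offset
    for by in range(h // block):
        for bx in range(w // block):
            sl = (slice(by * block, (by + 1) * block),
                  slice(bx * block, (bx + 1) * block))
            sse_gbc = np.sum((cur[sl] - comp[sl]) ** 2)
            sse_plain = np.sum((cur[sl] - ref[sl]) ** 2)
            use_gbc[by, bx] = sse_gbc < sse_plain
    return use_gbc

rng = np.random.default_rng(2)
ref = rng.uniform(0, 255, size=(16, 16))
cur = 0.7 * ref + 30.0           # global fade over the whole frame...
cur[:8, :8] = ref[:8, :8]        # ...except one block the fade did not reach
flags = gbc_onoff_map(cur, ref, 0.7, 30.0, block=8)
```

The top-left block is predicted better without GBC, so its flag comes back off while the faded blocks keep GBC on.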


International Conference on Consumer Electronics | 2012

MVC real-time video encoder for full-HDTV 3D video

Mitsuo Ikeda; Takayuki Onishi; Takashi Sano; Atsushi Sagata; Hiroe Iwasaki; Yasuyuki Nakajima; Koyo Nitta; Yasuko Takahashi; Kazuya Yokohari; Daisuke Kobayashi; Kazuto Kamikura; Hirohisa Jozawa

3D video technologies such as 3D cameras, displays, and video processing have become increasingly important for achieving high-quality, immersive video services. We propose an H.264 MVC encoder architecture for real-time 3D video distribution and transmission. We also present the first-ever successful development of a full-HDTV real-time MVC encoder.


Picture Coding Symposium | 2010

Enhanced region-based adaptive interpolation filter

Shohei Matsuo; Yukihiro Bandoh; Seishi Takamura; Hirohisa Jozawa

Motion compensation with quarter-pel accuracy was added to H.264/AVC to improve the coding efficiency for images exhibiting fractional-pel movement. To interpolate the reference pictures, a fixed 6-tap filter is used. However, the filter coefficients are constant regardless of the characteristics of the input video. An improved interpolation filter, called the Adaptive Interpolation Filter (AIF), which optimizes the filter coefficients on a frame-by-frame basis, was proposed to solve this problem. However, when the image is divided into multiple regions, each with different characteristics, coding efficiency could be further improved by performing the optimization on a region-by-region basis. Therefore, we propose a Region-Based AIF (RBAIF) that takes account of image locality. Simulations show that RBAIF offers about 0.43 points higher coding gain than the conventional AIF.
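Adapting the interpolation coefficients (per frame or per region) amounts to a least-squares fit against known sub-pel samples. A 1-D sketch with a synthetic band-limited signal; the function name and setup are illustrative, not the AIF syntax of the actual proposals:

```python
import numpy as np

def fit_interp_filter(ref, target, taps=6):
    """Least-squares 1-D interpolation filter: find h such that
    target[i] ≈ sum_k h[k] * ref[i + k] for every sample i."""
    n = len(target)
    A = np.stack([ref[k:k + n] for k in range(taps)], axis=1)
    h, *_ = np.linalg.lstsq(A, target, rcond=None)
    return h

# Synthetic band-limited signal; target is its true half-pel value
# between ref[i+2] and ref[i+3].
x = np.arange(64)
ref = np.sin(0.2 * x)
target = np.sin(0.2 * (x[:58] + 2.5))
h = fit_interp_filter(ref, target, taps=6)

pred = np.stack([ref[k:k + 58] for k in range(6)], axis=1) @ h
err_adaptive = np.mean((target - pred) ** 2)
# simple 2-tap bilinear interpolation as the fixed-filter baseline
err_bilinear = np.mean((target - 0.5 * (ref[2:60] + ref[3:61])) ** 2)
```

Because the coefficients are fitted to this particular signal, the adaptive filter's prediction error drops below the fixed baseline; RBAIF applies the same idea separately per region.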


Picture Coding Symposium | 2010

Generating subject oriented codec by evolutionary approach

Masaaki Matsumura; Seishi Takamura; Hirohisa Jozawa

Many image/video codecs are constructed as combinations of coding tools such as block division/scanning, branch selection, and entropy coders. Codec researchers develop new coding tools and seek versatile combinations that offer improved coding efficiency for various images/videos. However, because the number of possible combinations is huge, finding the best one by hand is impractical. In this paper, we propose an automatic optimization method that derives the combination best suited to each category of pictures. We prepare several categorized picture sets and optimize the combination for each category. For lossless image coding, our method achieves a maximum bit-rate reduction of over 2.8% compared to the single combination, prepared beforehand, that offers the best average bit-rate.
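An evolutionary search over tool combinations can be sketched with a simple genetic algorithm over on/off flags. The `bitrate` function here is a toy surrogate invented for the sketch; a real run would invoke the codec on the categorized pictures:

```python
import random

TOOLS = 8  # number of hypothetical on/off coding tools

def bitrate(combo, category_weights):
    """Toy surrogate cost (lower is better): disabling tool i forfeits its
    saving category_weights[i]; each enabled tool adds a small overhead."""
    return (sum(w for bit, w in zip(combo, category_weights) if not bit)
            + 0.1 * sum(combo))

def evolve(category_weights, pop=20, gens=40, seed=0):
    """Elitist GA: keep the best half, refill with crossover + mutation."""
    rng = random.Random(seed)
    popl = [[rng.randint(0, 1) for _ in range(TOOLS)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=lambda c: bitrate(c, category_weights))
        parents = popl[:pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, TOOLS)            # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(TOOLS)] ^= rng.random() < 0.2  # mutation
            children.append(child)
        popl = parents + children
    return min(popl, key=lambda c: bitrate(c, category_weights))

# per-category savings for each tool: the GA should keep the high-value tools
weights = [3, 0.05, 2, 0.01, 1, 0.02, 4, 0.03]
best = evolve(weights)
```

Running `evolve` once per picture category yields a category-specific tool combination, mirroring the paper's per-category optimization.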


Visual Communications and Image Processing | 1992

Video coding with motion-compensated subband/wavelet decomposition

Hirohisa Jozawa; Hiroshi Watanabe

A hybrid video coding scheme using motion-compensated subband decomposition is proposed. In the proposed method, interframe prediction is performed on the subband-domain data. The subband data used for prediction are obtained by applying the analysis filter to image blocks that have already been shifted by the estimated motion. Simulation results show that the efficiency of subband coding is significantly improved by the proposed hybrid coding method. Motion-compensated subband decomposition is well suited to subband/wavelet filter banks.
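The ordering matters: because subband coefficients are not shift-invariant, the reference must be motion-shifted before analysis filtering. A 1-D Haar sketch with a synthetic signal and integer-pel motion (names hypothetical):

```python
import numpy as np

def haar_analysis(x):
    """One-level 1-D Haar analysis: returns (lowpass, highpass) subbands."""
    p = x.reshape(-1, 2)
    return (p[:, 0] + p[:, 1]) / np.sqrt(2), (p[:, 0] - p[:, 1]) / np.sqrt(2)

prev_frame = np.sin(0.3 * np.arange(48))
cur_frame = np.sin(0.3 * (np.arange(48) - 5))     # whole scene shifted by 5 pels
motion = 5                                        # estimated motion (odd on purpose)

cur_block = cur_frame[16:32]
ref_mc = prev_frame[16 - motion:32 - motion]      # shift BEFORE the analysis filter
ref_plain = prev_frame[16:32]                     # no motion compensation

lo_c, hi_c = haar_analysis(cur_block)
lo_m, hi_m = haar_analysis(ref_mc)
lo_p, hi_p = haar_analysis(ref_plain)

res_mc = np.concatenate([lo_c - lo_m, hi_c - hi_m])
res_plain = np.concatenate([lo_c - lo_p, hi_c - hi_p])
```

With the odd shift of 5 pels, no integer shift of the 2:1-subsampled subbands could align the prediction, yet shifting in the pixel domain first drives the subband residual to zero.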


Picture Coding Symposium | 2012

Automatic construction of nonlinear denoising filter for video coding

Seishi Takamura; Hirohisa Jozawa

Current video coding technologies such as H.264/AVC and HEVC employ in-loop filters such as the deblocking filter, sample adaptive offset, and adaptive loop filter. This paper aims to develop a new type of loop filter using an evolutionary method. The generated filter is not necessarily linear and, most distinctively, is content-specific. Preliminary results show that the proposed filter achieves a bit-rate reduction of 0.62-1.01% against HM4.0 (High Efficiency configuration), a test model of HEVC, the state-of-the-art coding technology.

Collaboration


Hirohisa Jozawa's top co-author: Yoshiyuki Yashima (Chiba Institute of Technology).