Publications


Featured research published by Jani Lainema.


IEEE Transactions on Circuits and Systems for Video Technology | 2003

Adaptive deblocking filter

Peter List; Anthony Joch; Jani Lainema; Gisle Bjøntegaard; Marta Karczewicz

This paper describes the adaptive deblocking filter used in the H.264/MPEG-4 AVC video coding standard. The filter performs simple operations to detect and analyze artifacts on coded block boundaries and attenuates those by applying a selected filter.
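
As a rough illustration of the idea (not the normative H.264/MPEG-4 AVC filter), a deblocking step examines a few samples on each side of a block edge, decides from local gradients whether the discontinuity looks like a coding artifact rather than real image content, and smooths it only in that case. The sketch below uses hypothetical fixed thresholds in place of the standard's quantizer-dependent tables.

```python
def deblock_edge_1d(samples, alpha=20, beta=6):
    """Illustrative 1-D deblocking across a block boundary.

    `samples` holds (p1, p0, q0, q1): two pixels on each side of the edge.
    The edge is filtered only if the step across it is small enough to be a
    coding artifact (|p0 - q0| < alpha) while each side is locally smooth
    (|p1 - p0| < beta and |q1 - q0| < beta). alpha and beta are hypothetical
    stand-ins for the standard's QP-dependent thresholds.
    """
    p1, p0, q0, q1 = (int(s) for s in samples)
    if abs(p0 - q0) < alpha and abs(p1 - p0) < beta and abs(q1 - q0) < beta:
        # Smooth the two samples adjacent to the edge with a short low-pass filter.
        p0_f = (p1 + 2 * p0 + q0 + 2) >> 2
        q0_f = (p0 + 2 * q0 + q1 + 2) >> 2
        return p1, p0_f, q0_f, q1
    return p1, p0, q0, q1


# An 8-level step across the edge on otherwise smooth content gets attenuated.
print(deblock_edge_1d((100, 102, 110, 111)))  # -> (100, 104, 108, 111)
```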


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Intra Coding of the HEVC Standard

Jani Lainema; Frank Jan Bossen; Woo-Jin Han; Jung-Hye Min; Kemal Ugur

This paper provides an overview of the intra coding techniques in the High Efficiency Video Coding (HEVC) standard being developed by the Joint Collaborative Team on Video Coding (JCT-VC). The intra coding framework of HEVC follows that of traditional hybrid codecs and is built on spatial sample prediction followed by transform coding and postprocessing steps. Novel features contributing to the increased compression efficiency include a quadtree-based variable block size coding structure, block-size agnostic angular and planar prediction, adaptive pre- and postfiltering, and prediction direction-based transform coefficient scanning. This paper discusses the design principles applied during the development of the new intra coding methods and analyzes the compression performance of the individual tools. Computational complexity of the introduced intra prediction algorithms is analyzed both by deriving operational cycle counts and benchmarking an optimized implementation. Using objective metrics, the bitrate reduction provided by the HEVC intra coding over the H.264/advanced video coding reference is reported to be 22% on average and up to 36%. Significant subjective picture quality improvements are also reported when comparing the resulting pictures at fixed bitrate.
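
Of the tools listed above, planar prediction has a particularly compact formulation: each sample is the average of a horizontal and a vertical linear interpolation of the neighbouring reconstructed samples, which keeps the mode block-size agnostic. A minimal sketch along those lines (function and variable names are mine, and the code is illustrative rather than a normative implementation):

```python
import numpy as np

def planar_predict(top, left, top_right, bottom_left):
    """Planar intra prediction for an N x N block (sketch of the block-size
    agnostic formulation described in the paper).

    top:  N reconstructed samples directly above the block
    left: N reconstructed samples directly to its left
    top_right / bottom_left: single reference samples used as the far anchors
    of the horizontal and vertical linear interpolations.
    """
    n = len(top)
    shift = n.bit_length()          # equals log2(n) + 1 for power-of-two n
    pred = np.empty((n, n), dtype=np.int32)
    for y in range(n):
        for x in range(n):
            horiz = (n - 1 - x) * left[y] + (x + 1) * top_right
            vert = (n - 1 - y) * top[x] + (y + 1) * bottom_left
            pred[y, x] = (horiz + vert + n) >> shift
    return pred


# 4x4 example with flat neighbours: the prediction stays flat.
print(planar_predict([128] * 4, [128] * 4, 128, 128))
```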


EURASIP Journal on Advances in Signal Processing | 2009

The emerging MVC standard for 3D video services

Ying Chen; Ye-Kui Wang; Kemal Ugur; Miska Hannuksela; Jani Lainema; Moncef Gabbouj

Multiview video has gained wide interest recently. The huge amount of data that multiview applications must process is a heavy burden for both transmission and decoding. The Joint Video Team has recently devoted part of its effort to extending the widely deployed H.264/AVC standard to handle multiview video coding (MVC). The MVC extension of H.264/AVC includes a number of new techniques for improved coding efficiency, reduced decoding complexity, and new functionalities for multiview operations. MVC takes advantage of some of the interfaces and transport mechanisms introduced for the scalable video coding (SVC) extension of H.264/AVC, but the system-level integration of MVC is conceptually more challenging, as the decoder output may contain more than one view and can consist of any combination of the views at any temporal level. Generating all the output views also requires careful consideration and control of the available decoder resources. In this paper, multiview applications and solutions to support generic multiview as well as 3D services are introduced. The proposed solutions, which have been adopted into the draft MVC specification, cover a wide range of requirements for 3D video related to the interface, transport of MVC bitstreams, and MVC decoder resource management. The features introduced in MVC to support these solutions include marking of reference pictures, support for efficient view switching, structuring of the bitstream, signalling of view scalability supplemental enhancement information (SEI), and parallel decoding SEI.
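
One of the system-level issues mentioned above is that the decoder output may consist of any combination of views, so the decoder has to resolve which coded views the requested output views depend on before committing resources. A small, purely hypothetical sketch of that dependency resolution (the view identifiers and prediction structure are invented for illustration):

```python
def required_views(targets, dependencies):
    """Return all views that must be decoded to output `targets`, given
    inter-view prediction dependencies (view -> views it predicts from).
    Hypothetical illustration of MVC decoder resource planning.
    """
    needed, stack = set(), list(targets)
    while stack:
        view = stack.pop()
        if view not in needed:
            needed.add(view)
            stack.extend(dependencies.get(view, []))
    return sorted(needed)


# Invented 5-view prediction structure: view 0 is the base view.
deps = {0: [], 1: [0], 2: [0], 3: [1, 2], 4: [2]}
print(required_views({3}, deps))     # -> [0, 1, 2, 3]
print(required_views({0, 4}, deps))  # -> [0, 2, 4]
```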


IEEE Transactions on Circuits and Systems for Video Technology | 2010

High Performance, Low Complexity Video Coding and the Emerging HEVC Standard

Kemal Ugur; Kenneth Andersson; Arild Fuldseth; Gisle Bjontegaard; Lars Petter Endresen; Jani Lainema; Antti Hallapuro; Justin Ridge; Dmytro Rusanovskyy; Cixun Zhang; Andrey Norkin; Clinton Priddle; Thomas Rusert; Jonatan Samuelsson; Rickard Sjöberg; Zhuangfei Wu

This paper describes a low-complexity video codec with high coding efficiency. It was proposed to the High Efficiency Video Coding (HEVC) standardization effort of the Moving Picture Experts Group and the Video Coding Experts Group, and has been partially adopted into the initial HEVC Test Model under Consideration design. The proposal utilizes a quadtree-based coding structure with support for macroblocks of size 64 × 64, 32 × 32, and 16 × 16 pixels. Entropy coding is performed using a low-complexity variable-length coding scheme with improved context adaptation compared to the context-adaptive variable-length coding design in H.264/AVC. The proposal's interpolation and deblocking filter designs improve coding efficiency yet have low complexity. Finally, intra-picture coding methods have been improved to provide better subjective quality than H.264/AVC. The subjective quality of the proposed codec has been evaluated extensively within the HEVC project, with results indicating that visual quality similar to H.264/AVC High Profile anchors is achieved, measured by mean opinion score, using significantly fewer bits. These coding efficiency improvements are achieved with lower complexity than the H.264/AVC Baseline Profile, making the proposal particularly suitable for high-resolution, high-quality applications in resource-constrained environments.
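
The quadtree coding structure referred to above recursively decides, for each 64 × 64 region, whether to code it as a whole or to split it into four quadrants, down to 16 × 16. A toy sketch of that decision, where a variance-plus-header cost stands in for the encoder's actual rate-distortion cost:

```python
import numpy as np

def quadtree_partition(block, x=0, y=0, size=64, min_size=16, cost=None):
    """Recursively partition `block` (here a residual/activity map) into
    coding blocks of size 64, 32 or 16, splitting a block whenever its four
    quadrants together look cheaper than coding it whole. The cost function
    is a hypothetical stand-in for a real rate-distortion cost.
    """
    if cost is None:
        cost = lambda b: float(np.var(b)) * b.size + 32.0  # "distortion" + flat header cost
    region = block[y:y + size, x:x + size]
    whole = cost(region)
    if size == min_size:
        return [(x, y, size)], whole
    parts, split_cost, half = [], 0.0, size // 2
    for dy in (0, half):
        for dx in (0, half):
            p, c = quadtree_partition(block, x + dx, y + dy, half, min_size, cost)
            parts += p
            split_cost += c
    return (parts, split_cost) if split_cost < whole else ([(x, y, size)], whole)


# A flat block with one "busy" 16x16 corner: only that quadrant is split further.
blk = np.zeros((64, 64))
blk[:16, :16] = 100.0
leaves, _ = quadtree_partition(blk)
print(leaves)  # four 16x16 leaves in the top-left quadrant, 32x32 elsewhere
```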


International Symposium on Circuits and Systems | 2012

Complexity analysis of next-generation HEVC decoder

Marko Viitanen; Jarno Vanne; Timo D. Hämäläinen; Moncef Gabbouj; Jani Lainema

This paper analyzes the complexity of the HEVC video decoder being developed by the JCT-VC community. The HEVC reference decoder HM 3.1 is profiled with Intel VTune on an Intel Core 2 Duo processor. The analysis covers both Low Complexity (LC) and High Efficiency (HE) settings for resolutions varying from WQVGA (416 × 240 pixels) up to 1600p (2560 × 1600 pixels). The resulting cycle-accurate figures are compared with the respective results of H.264/AVC Baseline Profile (BP) and High Profile (HiP) reference decoders. HEVC offers a significant improvement in compression efficiency over H.264/AVC: the average BD-rate saving of LC is around 51% over BP, whereas the BD-rate gain of HE is around 45% over HiP. However, the average decoding complexities of LC and HE are increased by 61% and 87% over BP and HiP, respectively. In LC, the most complex functions are motion compensation (MC) and loop filtering (LF), which account on average for 50% and 14% of the decoder complexity. The decoding complexity of the HE configuration is on average 42% higher than that of the LC configuration; the majority of the difference is caused by the extra LF stages. In HE, the complexity shares of MC and LF are 37% and 32%, respectively. In practice, a standard 3 GHz dual-core processor is expected to be able to decode 1080p HEVC content in real time.
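
The BD-rate figures quoted above summarize the average bitrate difference between two rate-distortion curves. A compact sketch of the Bjøntegaard computation behind such numbers (the rate-distortion points below are invented for illustration):

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta rate (BD-rate): average bitrate difference, in
    percent, between two rate-distortion curves. Log-rate is fitted as a
    cubic in PSNR and the gap is integrated over the overlapping quality
    range; a negative result means a bitrate saving for the test codec.
    """
    p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    # Integrate both fitted curves over [lo, hi] and take the mean difference.
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0


# Invented rate-distortion points (kbps, dB) for an anchor and a test codec.
anchor = ([1000, 2000, 4000, 8000], [32.0, 35.0, 38.0, 41.0])
test = ([600, 1200, 2400, 4800], [32.2, 35.1, 38.2, 41.1])
print(round(bd_rate(*anchor, *test), 1), "% BD-rate")
```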


Multimedia Signal Processing | 2011

Angular intra prediction in High Efficiency Video Coding (HEVC)

Jani Lainema; Kemal Ugur

New video coding solutions, such as the HEVC (High Efficiency Video Coding) standard being developed by the JCT-VC (Joint Collaborative Team on Video Coding), are typically designed for high-resolution video content. Increasing video resolution creates two basic requirements for practical video codecs: they need to provide compression efficiency superior to prior video coding solutions, and their computational requirements need to be aligned with foreseeable hardware platforms. This paper proposes an intra prediction method that is designed to provide high compression efficiency and can be implemented efficiently in resource-constrained environments, making it applicable to a wide range of use cases. When designing the method, special attention was given to the algorithmic definition of the prediction sample generation, so that the same reconstruction process can be used for different block sizes. The proposed method outperforms earlier variations of the same family of technologies significantly and consistently across different classes of video material, and it has recently been adopted as the directional intra prediction method of the draft HEVC standard. Experimental results show that the proposed method outperforms the H.264/AVC intra prediction approach by 4.8% on average. For sequences with dominant directional structures, the coding efficiency gains become more significant and exceed 10%.
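
The prediction sample generation described above reduces, for a given angle, to a two-tap linear interpolation between reference samples at 1/32-sample accuracy, and the same arithmetic applies at every block size. A sketch of the vertical-direction case with a non-negative angle parameter (negative angles, which additionally project left-column references onto the top row, are omitted; names and example values are mine):

```python
import numpy as np

def angular_predict(ref_top, n, angle):
    """Angular intra prediction, vertical direction family, for an n x n block.

    `ref_top` holds reconstructed samples starting at the top-left corner and
    extending above and to the top-right of the block (length >= 2 * n + 2).
    `angle` is the horizontal displacement of the prediction direction in
    1/32-sample units per row (non-negative here). Each predicted sample is a
    two-tap linear interpolation between two reference samples.
    """
    pred = np.empty((n, n), dtype=np.int32)
    for y in range(n):
        offset = (y + 1) * angle
        idx, frac = offset >> 5, offset & 31
        for x in range(n):
            a, b = ref_top[x + idx + 1], ref_top[x + idx + 2]
            pred[y, x] = ((32 - frac) * a + frac * b + 16) >> 5
    return pred


# 4x4 example: a ramp along the top reference row propagated at a shallow angle.
ref = list(range(100, 110))   # top-left sample followed by samples above/top-right
print(angular_predict(ref, 4, 5))
```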


IEEE Journal of Selected Topics in Signal Processing | 2013

Motion Compensated Prediction and Interpolation Filter Design in H.265/HEVC

Kemal Ugur; Alexander Alshin; Elena Alshina; Frank Jan Bossen; Woo-Jin Han; Jeong-hoon Park; Jani Lainema

Coding efficiency gains in the new High Efficiency Video Coding (H.265/HEVC) video coding standard are achieved by improving many aspects of the traditional hybrid coding framework. Motion compensated prediction, and in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design of the H.265/HEVC standard. First, the improvements of H.265/HEVC interpolation filtering over H.264/AVC are presented. These improvements include a novel filter coefficient design with an increased number of taps and the use of higher-precision operations in interpolation filter computations. Then, the computational complexity is analyzed, both from theoretical and practical perspectives. Theoretical complexity analysis is done by studying the worst-case complexity analytically, whereas practical analysis is done by profiling an optimized decoder implementation. Coding efficiency improvements over the H.264/AVC interpolation filter are studied and experimental results are presented. They show a 4.0% average bitrate reduction for the luma component and an 11.3% average bitrate reduction for the chroma components. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
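
For reference, the 8-tap luma filter coefficients standardized in H.265/HEVC for the quarter-, half-, and three-quarter-sample positions are compact enough to write down directly. The sketch below applies them in a single horizontal pass over 8-bit samples; the standard additionally keeps higher intermediate precision when horizontal and vertical filtering are cascaded, which is omitted here.

```python
import numpy as np

# 8-tap luma interpolation filters of H.265/HEVC, indexed by the fractional
# position in quarter-sample units (1 = 1/4, 2 = 1/2, 3 = 3/4); taps sum to 64.
LUMA_FILTERS = {
    1: (-1, 4, -10, 58, 17, -5, 1, 0),
    2: (-1, 4, -11, 40, 40, -11, 4, -1),
    3: (0, 1, -5, 17, 58, -10, 4, -1),
}

def interpolate_row(row, frac):
    """Horizontal fractional-sample interpolation of one row of 8-bit samples.
    `row` must carry 3 samples of left margin and 4 samples of right margin.
    Single-stage sketch: rounding offset 32 and a shift by 6 normalize the
    64-sum filters back to the 8-bit range.
    """
    taps = LUMA_FILTERS[frac]
    out = []
    for i in range(3, len(row) - 4):
        acc = sum(c * int(row[i - 3 + k]) for k, c in enumerate(taps))
        out.append(int(np.clip((acc + 32) >> 6, 0, 255)))
    return out


# Half-sample positions of a short ramp (margins padded by edge repetition).
row = [100] * 3 + [100, 110, 120, 130] + [130] * 4
print(interpolate_row(row, 2))  # values land between the integer-position samples
```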


IEEE Transactions on Circuits and Systems for Video Technology | 2009

Video Coding With Low-Complexity Directional Adaptive Interpolation Filters

Dmytro Rusanovskyy; Kemal Ugur; Antti Hallapuro; Jani Lainema; Moncef Gabbouj

A novel adaptive interpolation filter structure for video coding with motion-compensated prediction is presented in this letter. The proposed scheme uses an independent directional adaptive interpolation filter for each sub-pixel location. The Wiener interpolation filter coefficients are computed analytically for each inter-coded frame at the encoder side and transmitted to the decoder. Experimental results show that the proposed method achieves up to 1.1 dB coding gain and a 15% average bit-rate reduction for high-resolution video material compared to the standard non-adaptive interpolation scheme of H.264/AVC, while requiring 36% fewer arithmetic operations for interpolation. The proposed interpolation can be implemented entirely in 16-bit arithmetic, so it has important use cases in mobile multimedia environments where computational resources are severely constrained.
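
The per-frame Wiener coefficient computation mentioned above amounts to a least-squares fit: for each sub-pixel position, the encoder collects the integer-position reference samples seen by the filter together with the original samples the prediction should match, and solves the normal equations. A sketch under those assumptions (the tap count and the toy data are invented; the actual scheme uses directional filters per sub-pixel location and quantizes the taps before transmission):

```python
import numpy as np

def estimate_wiener_taps(ref_windows, originals):
    """Least-squares (Wiener) estimation of interpolation filter taps for one
    sub-pixel position. Each row of `ref_windows` holds the integer-position
    reference samples the filter sees for one motion-compensated sample;
    `originals` holds the corresponding source samples to be matched.
    """
    A = np.asarray(ref_windows, dtype=float)   # shape: (num_samples, num_taps)
    d = np.asarray(originals, dtype=float)     # shape: (num_samples,)
    # Normal equations (A^T A) h = A^T d, i.e. autocorrelation vs. cross-correlation.
    return np.linalg.solve(A.T @ A, A.T @ d)


# Toy data: reference windows filtered by a "true" 6-tap kernel plus noise.
rng = np.random.default_rng(0)
true_taps = np.array([1, -5, 20, 52, -4, 0]) / 64.0
refs = rng.integers(0, 256, size=(500, 6))
orig = refs @ true_taps + rng.normal(0, 0.5, size=500)
print(np.round(estimate_wiener_taps(refs, orig) * 64, 1))  # ~[1, -5, 20, 52, -4, 0]
```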


Picture Coding Symposium | 2010

Low complexity video coding and the emerging HEVC standard

Kemal Ugur; Kenneth Andersson; Arild Fuldseth; Gisle Bjontegaard; Lars Petter Endresen; Jani Lainema; Antti Hallapuro; Justin Ridge; Dmytro Rusanovskyy; Cixun Zhang; Andrey Norkin; Clinton Priddle; Thomas Rusert; Jonatan Samuelsson; Rickard Sjöberg; Zhuangfei Wu

This paper describes a low-complexity video codec with high coding efficiency. It was proposed to the High Efficiency Video Coding (HEVC) standardization effort of MPEG and VCEG, and has been partially adopted into the initial HEVC Test Model under Consideration design. The proposal utilizes a quadtree structure with support for large macroblocks of size 64×64 and 32×32, in addition to macroblocks of size 16×16. Entropy coding is done using a low-complexity variable-length-coding-based scheme with improved context adaptation over the H.264/AVC design. In addition, the proposal includes improved interpolation and deblocking filters, giving better coding efficiency while keeping complexity low. Finally, an improved intra coding method is presented. The subjective quality of the proposal is evaluated extensively, and the results show that the proposed method achieves visual quality similar to H.264/AVC High Profile anchors with around 50% and 35% bit-rate reduction for the low-delay and random-access experiments, respectively, on high-definition sequences. This is achieved with less complexity than the H.264/AVC Baseline Profile, making the proposal especially suitable for resource-constrained environments.


International Conference on Acoustics, Speech, and Signal Processing | 2009

Video coding using Variable Block-Size Spatially Varying Transforms

Cixun Zhang; Kemal Ugur; Jani Lainema; Moncef Gabbouj

In our previous work, we introduced Spatially Varying Transforms (SVT) for video coding, where the location of the transform block within the macroblock is not fixed but varying. In this paper, we extend this concept and present a novel method called Variable Block-size Spatially Varying Transforms (VBSVT). VBSVT utilizes Variable Block-size Transforms (VBT) in the SVT framework and is shown to be better suited for coding prediction error with varying characteristics than fixed block-size SVT, as well as standard methods that use fixed or adaptive block sizes at fixed spatial locations. In addition, VBSVT has decoding complexity similar to fixed block-size SVT and lower decoding complexity than the standard methods, as only a portion of the prediction error needs to be decoded. Experimental results show that VBSVT achieves a 4.1% gain over H.264/AVC on average over a wide range of test sequences. The gains become more significant at high quality levels and reach up to 13.5%, which makes the proposed algorithm well suited for future video coding solutions focusing on high-fidelity applications.
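
The core encoder decision in VBSVT is to choose which single region of the macroblock's prediction error gets transformed and coded, trying several block shapes and positions; the rest of the residual is skipped, which is also why decoding stays cheap. A toy sketch of that selection, where captured residual energy stands in for the real rate-distortion cost and the candidate shapes are illustrative:

```python
import numpy as np

def select_svt_block(residual, shapes=((8, 8), (4, 16), (16, 4))):
    """Choose the position and shape of the single transform block inside a
    16x16 macroblock residual, in the spirit of variable block-size SVT.
    Only the selected region would be transformed and coded; the remaining
    prediction error is skipped. Maximizing captured residual energy stands
    in for the encoder's actual rate-distortion decision.
    """
    best = None
    for h, w in shapes:
        for y in range(16 - h + 1):
            for x in range(16 - w + 1):
                energy = float(np.sum(residual[y:y + h, x:x + w] ** 2))
                if best is None or energy > best[0]:
                    best = (energy, (h, w), (y, x))
    return best[1], best[2]   # chosen (height, width) and its top-left position


# Toy residual: most of the prediction error sits in a thin horizontal strip,
# so the wide 4x16 candidate covering it is expected to win over 8x8.
res = np.zeros((16, 16))
res[6:9, :] = np.random.randn(3, 16) * 8
print(select_svt_block(res))
```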
