Publication


Featured research published by Marta Mrak.


Conference on Computer as a Tool | 2003

Picture quality measures in image compression systems

Marta Mrak; Sonja Grgic; Mislav Grgic

A major problem in evaluating picture quality in image compression systems is the extreme difficulty in describing the type and amount of degradation in the reconstructed image. Because of the inherent drawbacks associated with subjective measures of picture quality, there has been a great deal of interest in developing an objective measure that can be used as a substitute. The aim of this paper is to examine a set of objective picture quality measures for application in still image compression systems and to highlight the correlation of these measures with subjective picture quality measures. Picture quality is measured using nine different objective picture quality measures and subjectively using the mean opinion score (MOS) as a measure of perceived picture quality. The correlation between each objective measure and MOS is found. The effects of different image compression ratios are assessed and the best objective measures are proposed. Our results show that some objective measures correlate well with the perceived picture quality for a given compression algorithm, but they are not reliable for evaluation across different algorithms. We therefore compared the objective picture quality measures across different algorithms and identified measures that perform well in all tested image compression systems.
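
As a rough illustration of this kind of evaluation (not the measures or data from the paper), the sketch below computes one common objective measure, PSNR, for a few hypothetical reconstructions and correlates it with equally hypothetical MOS values; each of the nine measures studied in the paper would slot into the same correlation step.

```python
# Minimal sketch with hypothetical data: one objective measure (PSNR)
# correlated against subjective MOS scores.
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference and a reconstructed image."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def pearson_correlation(objective_scores, mos_scores) -> float:
    """Pearson correlation between an objective measure and mean opinion scores."""
    return float(np.corrcoef(objective_scores, mos_scores)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, size=(64, 64))
    # Hypothetical reconstructions at increasing distortion levels.
    reconstructions = [np.clip(reference + rng.normal(0, s, reference.shape), 0, 255)
                       for s in (2, 5, 10, 20)]
    psnr_values = [psnr(reference, rec) for rec in reconstructions]
    mos_values = [4.8, 4.2, 3.1, 2.0]          # hypothetical subjective scores
    print("PSNR:", [round(v, 2) for v in psnr_values])
    print("Correlation with MOS:", round(pearson_correlation(psnr_values, mos_values), 3))
```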


IEEE Transactions on Circuits and Systems for Video Technology | 2016

Video Quality Evaluation Methodology and Verification Testing of HEVC Compression Performance

Thiow Keng Tan; Rajitha Weerakkody; Marta Mrak; Naeem Ramzan; Vittorio Baroncini; Jens-Rainer Ohm; Gary J. Sullivan

The High Efficiency Video Coding (HEVC) standard (ITU-T H.265 and ISO/IEC 23008-2) has been developed with the main goal of providing significantly improved video compression compared with its predecessors. In order to evaluate this goal, verification tests were conducted by the Joint Collaborative Team on Video Coding of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29. This paper presents the subjective and objective results of a verification test in which the performance of the new standard is compared with its highly successful predecessor, the Advanced Video Coding (AVC) video compression standard (ITU-T H.264 and ISO/IEC 14496-10). The test used video sequences with resolutions ranging from 480p up to ultra-high definition, encoded at various quality levels using the HEVC Main profile and the AVC High profile. In order to provide a clear evaluation, this paper also discusses various aspects of the analysis of the test results. The tests showed that bit rate savings of 59% on average can be achieved by HEVC for the same perceived video quality, which is higher than the bit rate saving of 44% demonstrated with the PSNR objective quality metric. However, it has been shown that the bit rates required to achieve good quality of compressed content, as well as the bit rate savings relative to AVC, are highly dependent on the characteristics of the tested content.
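
Average bit rate differences of this kind are commonly summarised with Bjøntegaard-delta-style metrics. The sketch below is an assumed, minimal BD-rate computation over hypothetical rate/PSNR points; it is not the JCT-VC test methodology or its data.

```python
# Minimal sketch: Bjøntegaard-delta-style average bit-rate difference
# between two codecs, computed from hypothetical rate/PSNR points.
import numpy as np

def bd_rate(rate_a, psnr_a, rate_b, psnr_b) -> float:
    """Average bit-rate difference (%) of codec B relative to codec A,
    integrated over the overlapping quality range (cubic fit in log-rate)."""
    log_a, log_b = np.log10(rate_a), np.log10(rate_b)
    poly_a = np.polyfit(psnr_a, log_a, 3)        # log-rate as a cubic in PSNR
    poly_b = np.polyfit(psnr_b, log_b, 3)
    lo, hi = max(min(psnr_a), min(psnr_b)), min(max(psnr_a), max(psnr_b))
    int_a = np.polyval(np.polyint(poly_a), hi) - np.polyval(np.polyint(poly_a), lo)
    int_b = np.polyval(np.polyint(poly_b), hi) - np.polyval(np.polyint(poly_b), lo)
    avg_diff = (int_b - int_a) / (hi - lo)
    return (10 ** avg_diff - 1) * 100.0

if __name__ == "__main__":
    # Hypothetical rate/PSNR points for an AVC-like and an HEVC-like encoder.
    avc_rate, avc_psnr = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
    hevc_rate, hevc_psnr = [600, 1200, 2400, 4800], [34.2, 36.8, 39.3, 41.8]
    print(f"BD-rate of HEVC-like vs AVC-like: {bd_rate(avc_rate, avc_psnr, hevc_rate, hevc_psnr):.1f}%")
```

Negative values indicate bit rate savings of the second codec relative to the first at equal objective quality.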


IEEE Transactions on Consumer Electronics | 2009

3D motion estimation for depth image coding in 3D video coding

Bunchat Kamolrat; W.A.C. Fernando; Marta Mrak; Ahmet M. Kondoz

In this paper, a new solution is introduced for the efficient compression of 3D video based on color and depth maps. While standard video codecs are designed for coding monoscopic videos, their application to depth maps is found to be suboptimal. With regard to the special properties of depth maps, we propose an extension to conventional video coding that takes advantage of object motion in the depth direction. Instead of performing a 2D motion search, as is common in conventional video codecs, we propose the use of a 3D motion search that is able to better exploit the temporal correlations of 3D content. In this new framework, the motion of blocks in depth maps is described using 3D motion vectors (x, y, z), representing the horizontal, vertical, and depth directions, respectively. This leads to more accurate motion prediction and a smaller residual. The experimental results show that the proposed technique delivers an improvement in motion compensation, which leads to gains in compression efficiency.
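
A minimal sketch of the underlying idea, under the assumption that the depth component z of the motion vector acts as a constant offset added to the reference depth block; the function names and test data are hypothetical and this is not the authors' codec integration.

```python
# Minimal sketch: brute-force 3D motion search for depth-map blocks,
# where the third component dz is a constant depth offset.
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def motion_search_3d(current, reference, bx, by, block=8, search=4, z_range=range(-8, 9, 2)):
    """Return the (dx, dy, dz) minimising SAD for the block at (bx, by)."""
    cur = current[by:by + block, bx:bx + block]
    best = (0, 0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > reference.shape[0] or x + block > reference.shape[1]:
                continue
            ref = reference[y:y + block, x:x + block].astype(np.int32)
            for dz in z_range:                     # depth-direction component
                cost = sad(cur, np.clip(ref + dz, 0, 255))
                if cost < best[3]:
                    best = (dx, dy, dz, cost)
    return best[:3]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    reference = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
    # Synthetic current frame: shifted by (1, 2) pixels and moved 4 levels closer in depth.
    current = np.clip(np.roll(reference, (1, 2), axis=(0, 1)).astype(np.int32) + 4, 0, 255)
    print("Best 3D motion vector (dx, dy, dz):", motion_search_3d(current, reference, 8, 8))
```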


International Conference on Image Processing | 2009

Utilisation of edge adaptive upsampling in compression of depth map videos for enhanced free-viewpoint rendering

Erhan Ekmekcioglu; Marta Mrak; S. Worrall; Ahmet M. Kondoz

In this paper we propose a novel video object edge adaptive upsampling scheme for application in video-plus-depth and Multi-View plus Depth (MVD) video coding chains with reduced resolution. The proposed scheme improves the rate-distortion performance of reduced-resolution depth map coders by taking into account the rendering distortion induced in free-viewpoint videos. The inherent loss of fine detail due to downsampling, particularly at video object boundaries, causes significant visual artefacts in rendered free-viewpoint images. The proposed edge adaptive upsampling filter preserves and better reconstructs such critical object boundaries. Furthermore, the proposed scheme does not require the edge information to be communicated to the decoder, as the edge information used in the adaptive upsampling is derived from the reconstructed colour video. Test results show that a gain of as much as 1.2 dB in free-viewpoint video quality can be achieved with the proposed method compared to the scheme that uses the linear MPEG re-sampling filter. The proposed approach is suitable for video-plus-depth as well as MVD applications, in which it is critical to satisfy bandwidth constraints while maintaining high free-viewpoint image quality.
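
A simplified sketch of the general idea, assuming a basic gradient edge detector on the reconstructed colour frame and a switch between nearest-neighbour and bilinear upsampling of the depth map; the actual filter design in the paper is more sophisticated, and all names and data below are illustrative.

```python
# Minimal sketch: colour-guided, edge-adaptive 2x upsampling of a depth map.
import numpy as np

def edge_map_from_colour(colour: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Simple gradient-magnitude edge detector on the reconstructed colour frame."""
    gy, gx = np.gradient(colour.astype(np.float64))
    return np.hypot(gx, gy) > threshold

def bilinear_upsample_2x(img: np.ndarray) -> np.ndarray:
    """Separable linear interpolation to twice the width and height."""
    h, w = img.shape
    xs, ys = np.linspace(0, w - 1, 2 * w), np.linspace(0, h - 1, 2 * h)
    tmp = np.array([np.interp(xs, np.arange(w), row) for row in img.astype(np.float64)])
    return np.array([np.interp(ys, np.arange(h), tmp[:, j]) for j in range(2 * w)]).T

def edge_adaptive_upsample(depth_low: np.ndarray, colour_full: np.ndarray) -> np.ndarray:
    """Use nearest-neighbour copying near colour edges, bilinear interpolation elsewhere."""
    nearest = np.kron(depth_low.astype(np.float64), np.ones((2, 2)))
    smooth = bilinear_upsample_2x(depth_low)
    edges = edge_map_from_colour(colour_full)
    return np.where(edges, nearest, smooth)

if __name__ == "__main__":
    depth_low = np.zeros((16, 16))
    depth_low[:, 8:] = 100.0                      # a sharp vertical depth discontinuity
    colour_full = np.zeros((32, 32))
    colour_full[:, 16:] = 200.0                   # co-located edge in the colour frame
    print("Upsampled depth shape:", edge_adaptive_upsample(depth_low, colour_full).shape)
```

Because the guiding edges come from the decoded colour frame, the decoder can reproduce the same upsampling decisions without any side information, which is the key property exploited in the paper.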


IEEE Transactions on Consumer Electronics | 2008

Joint source and channel coding for 3D video with depth image - based rendering

Bunchat Kamolrat; W.A.C. Fernando; Marta Mrak; Ahmet M. Kondoz

Following the recent commercial availability of autostereoscopic 3D displays, which allow 3D visual data to be viewed without the use of special headgear or glasses, it is anticipated that applications of 3D video will increase rapidly in the near future. In this paper we propose a joint source-channel coding scheme for 3D video coding based on depth image-based rendering. We consider different source and channel coding rates to find the optimum coding performance under a given channel bit rate for a WiMAX-based communication channel. Once the optimum bit allocation for the color and depth image sequences is found, different protection levels are considered for coding both image sequences. Finally, optimum protection levels are proposed for the best video quality.
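
The search over source and channel coding rates can be pictured as a small exhaustive optimisation. The sketch below is an assumed illustration with a made-up quality model and rate grid; it is not the authors' optimiser or their WiMAX simulation.

```python
# Minimal sketch: exhaustive search over colour/depth source rates and FEC code
# rates under a total channel bit-rate budget, scored by a toy quality model.
import math
from itertools import product

def best_allocation(total_rate_kbps, channel_code_rates, quality_model, step_kbps=100):
    """Try every (colour rate, depth rate, colour FEC, depth FEC) combination
    whose protected bit rate fits the budget and keep the best-scoring one."""
    best_combo, best_score = None, float("-inf")
    for colour_kbps in range(step_kbps, total_rate_kbps, step_kbps):
        for depth_kbps in range(step_kbps, total_rate_kbps - colour_kbps + 1, step_kbps):
            for rc, rd in product(channel_code_rates, repeat=2):
                protected = colour_kbps / rc + depth_kbps / rd   # FEC expands payload by 1/r
                if protected > total_rate_kbps:
                    continue
                score = quality_model(colour_kbps, depth_kbps, rc, rd)
                if score > best_score:
                    best_combo, best_score = (colour_kbps, depth_kbps, rc, rd), score
    return best_combo, best_score

def toy_quality(colour_kbps, depth_kbps, rc, rd):
    """Hypothetical quality model: diminishing returns in rate, reward for stronger FEC."""
    return math.log1p(colour_kbps) + 0.5 * math.log1p(depth_kbps) - 0.3 * (rc + rd)

if __name__ == "__main__":
    combo, _ = best_allocation(2000, [1 / 2, 2 / 3, 3 / 4], toy_quality)
    print("best (colour kbps, depth kbps, colour code rate, depth code rate):", combo)
```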


International Conference on Image Processing | 2003

A context modeling algorithm and its application in video compression

Marta Mrak; Detlev Marpe; Thomas Wiegand

A new algorithm for context modeling of binary sources with application to video compression is presented. Our proposed method is based on a tree rearrangement and tree selection process for optimized modeling of binary context trees. We demonstrate its use for adaptive context-based coding of selected syntax elements in a video coder. For that purpose we apply the proposed technique to the H.264/AVC standard and evaluate its performance for different sources and different quantization parameters. Experimental results show that the proposed algorithm achieves coding gains similar or superior to those obtained with the H.264/AVC CABAC algorithm.
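
To make the role of context modelling concrete, the sketch below estimates the ideal coded bit cost of a binary source under adaptive per-context probabilities, taking the context to be the last k bits; the tree rearrangement and selection process from the paper is not reproduced here, and the source is synthetic.

```python
# Minimal sketch: adaptive context modelling of a binary source and the
# resulting ideal arithmetic-coding cost for different context orders.
import math
import random
from collections import defaultdict

def adaptive_context_code_length(bits, context_order=2):
    """Ideal coded cost (in bits) when each context (the last `context_order`
    bits) keeps its own adaptive, Laplace-smoothed probability estimate."""
    counts = defaultdict(lambda: [1, 1])              # (zeros, ones) per context
    history = (0,) * context_order
    total = 0.0
    for b in bits:
        zeros, ones = counts[history]
        p = (ones if b else zeros) / (zeros + ones)   # adaptive probability of this symbol
        total += -math.log2(p)
        counts[history][1 if b else 0] += 1
        if context_order:
            history = history[1:] + (b,)
    return total

if __name__ == "__main__":
    random.seed(0)
    bits, prev = [], 0
    for _ in range(10_000):                            # a source whose bits tend to repeat
        prev = prev if random.random() < 0.9 else 1 - prev
        bits.append(prev)
    for k in (0, 1, 2):
        cost = adaptive_context_code_length(bits, k)
        print(f"context order {k}: {cost / len(bits):.3f} bits/symbol")
```

Higher-order contexts capture the source's memory and lower the per-symbol cost, which is the effect that optimised context trees exploit more systematically.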


IEEE Transactions on Circuits and Systems for Video Technology | 2016

High Dynamic Range Video Compression Exploiting Luminance Masking

Yang Zhang; Matteo Naccari; Dimitris Agrafiotis; Marta Mrak; David R. Bull

The human visual system (HVS) exhibits nonlinear sensitivity to the distortions introduced by lossy image and video coding. This effect is due to the luminance masking, contrast masking, and spatial and temporal frequency masking characteristics of the HVS. This paper proposes a novel perception-based quantization to remove nonvisible information in high dynamic range (HDR) color pixels by exploiting luminance masking so that the performance of the High Efficiency Video Coding (HEVC) standard is improved for HDR content. A profile scaling based on a tone-mapping curve computed for each HDR frame is introduced. The quantization step is then perceptually tuned on a transform unit basis. The proposed method has been integrated into the HEVC reference model for the HEVC range extensions (HM-RExt), and its performance was assessed by measuring the bitrate reduction against the HM-RExt. The results indicate that the proposed method achieves significant bitrate savings, up to 42.2%, with an average of 12.8%, compared with HEVC at the same quality (based on HDR-visible difference predictor-2 and subjective evaluations).
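
A much-simplified sketch of the idea of luminance-masking-driven quantisation: each block's quantisation parameter is offset according to its mean luminance mapped through a log-shaped curve. The curve, block size, and offsets below are assumptions for illustration, not the profile scaling integrated into HM-RExt.

```python
# Minimal sketch: per-block QP offsets for HDR luma driven by luminance masking.
import numpy as np

def qp_offset_from_luma(mean_luma, frame_max_luma, max_offset=6):
    """Map a block's mean luminance through a log-shaped curve to a QP offset.
    Bright (strongly masking) blocks get a larger offset, dark blocks little or none."""
    normalised = np.clip(mean_luma / max(frame_max_luma, 1e-6), 0.0, 1.0)
    return int(round(max_offset * np.log1p(9.0 * normalised) / np.log(10.0)))

def perceptual_qp_map(luma_frame, base_qp=32, block=64):
    """Assign a QP per block (a stand-in for per-transform-unit tuning)."""
    h, w = luma_frame.shape
    frame_max = float(luma_frame.max())
    qp_map = np.empty((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            blk = luma_frame[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            qp_map[by, bx] = base_qp + qp_offset_from_luma(float(blk.mean()), frame_max)
    return qp_map

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Hypothetical HDR luma frame: a dark region next to a very bright region.
    luma = np.concatenate([rng.uniform(0, 100, (128, 128)),
                           rng.uniform(2000, 4000, (128, 128))], axis=1)
    print(perceptual_qp_map(luma, base_qp=32, block=64))
```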


International Conference on Signal Processing | 2004

On the influence of motion vector precision limiting in scalable video coding

Marta Mrak; Gck Abhayaratne; Ebroul Izquierdo

Recent studies on scalable video coding have not only substantiated the need for such technology but have also made evident that many related problems remain open and need to be tackled if truly scalable video coding is to be achieved. One of these challenges relates to the coding of motion vectors. In conventional coders, motion vectors are treated and coded in a non-progressive manner. Since scalable video coding targets decoding at several resolutions and a wide range of quality levels, the motion information needs to be encoded in an adaptive way. We propose a simple, yet efficient, strategy for scalable motion vector coding. The results show improvements in resolution scalability performance at lower bit rates, while avoiding any negative influence at high resolutions and bit rates.
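
One simple way to make motion information scalable is to split each motion vector component into a coarse part and a precision refinement, so that lower layers decode only the coarse motion. The sketch below illustrates that split; it is an assumed example, not the coding strategy evaluated in the paper.

```python
# Minimal sketch: splitting a quarter-pel motion vector component into a
# coarse base-layer value and an enhancement-layer precision refinement.

def split_motion_vector(mv_quarter_pel: int, precision_bits_dropped: int = 2):
    """Return (coarse_mv, refinement) for one quarter-pel MV component.
    Dropping 2 bits of precision turns quarter-pel motion into integer-pel."""
    coarse = mv_quarter_pel >> precision_bits_dropped
    refinement = mv_quarter_pel - (coarse << precision_bits_dropped)
    return coarse, refinement

def reconstruct(coarse: int, refinement: int = 0, precision_bits_dropped: int = 2) -> int:
    """Rebuild the MV component from whatever layers were actually decoded."""
    return (coarse << precision_bits_dropped) + refinement

if __name__ == "__main__":
    mv = 13                                            # 3.25 pels in quarter-pel units
    coarse, refinement = split_motion_vector(mv)
    print("base layer decodes:", reconstruct(coarse))                   # 12 -> 3.0 pels
    print("enhancement layer decodes:", reconstruct(coarse, refinement))  # 13 -> 3.25 pels
```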


Workshop on Image Analysis for Multimedia Interactive Services | 2009

Global motion estimation using variable block sizes and its application to object segmentation

Marina Georgia Arvanitidou; Alexander Glantz; Andreas Krutz; Thomas Sikora; Marta Mrak; Ahmet M. Kondoz

Global motion is estimated either in the pixel domain or in the block-based domain. Until now, approaches of the latter type have been based on fixed-size blocks, while recent compression methods tend to use variable block sizes during motion estimation. In this paper we present a new procedure for global motion estimation based on a variable block size motion vector field. A block matching algorithm that is able to adapt the block size according to the motion complexity within the frame is used. The resulting motion vectors are employed for global motion estimation. Furthermore, binary foreground-background masks are created based on the frame-by-frame motion compensated differences by exploiting spatial conditions through anisotropic diffusion filtering. For global motion estimation, the performance evaluation in terms of background PSNR shows an enhancement of more than 2.5 dB on the well-known “Stefan” sequence, compared to the conventional case of fixed block size, at a reasonable implementation complexity.
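
Global motion estimation from a block-based motion vector field typically amounts to a weighted model fit. The sketch below fits a 6-parameter affine model by weighted least squares, using block area as the weight to reflect variable block sizes; the model choice, weighting, and data are assumptions rather than the authors' exact procedure.

```python
# Minimal sketch: weighted least-squares fit of an affine global-motion model
# to a block-based motion vector field with variable block sizes.
import numpy as np

def fit_affine_global_motion(centres, motion_vectors, block_areas):
    """centres: (N,2) block centres (x, y); motion_vectors: (N,2) MVs (dx, dy);
    block_areas: (N,) weights. Returns (a, b, c, d, e, f) such that
    dx = a*x + b*y + c and dy = d*x + e*y + f."""
    x, y = centres[:, 0], centres[:, 1]
    design = np.stack([x, y, np.ones_like(x)], axis=1)             # (N, 3)
    w = np.sqrt(np.asarray(block_areas, dtype=np.float64))[:, None]
    params_x, *_ = np.linalg.lstsq(design * w, motion_vectors[:, 0:1] * w, rcond=None)
    params_y, *_ = np.linalg.lstsq(design * w, motion_vectors[:, 1:2] * w, rcond=None)
    return np.concatenate([params_x.ravel(), params_y.ravel()])

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    centres = rng.uniform(0, 352, size=(200, 2))                   # block centres in a CIF frame
    true = np.array([0.01, 0.0, -2.0, 0.0, 0.01, 1.0])             # slow zoom plus a pan
    mvs = np.stack([true[0] * centres[:, 0] + true[1] * centres[:, 1] + true[2],
                    true[3] * centres[:, 0] + true[4] * centres[:, 1] + true[5]], axis=1)
    mvs += rng.normal(0, 0.1, mvs.shape)                           # local-motion noise
    areas = rng.choice([64, 256, 1024], size=200)                  # 8x8, 16x16, 32x32 blocks
    print("estimated affine parameters:", np.round(fit_affine_global_motion(centres, mvs, areas), 3))
```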


IEEE Journal of Selected Topics in Signal Processing | 2011

Combined Intra-Prediction for High-Efficiency Video Coding

Andrea Gabriellini; David Flynn; Marta Mrak; Thomas Davies

New activities in the video coding community are focused on the delivery of technologies that will enable economic handling of future visual formats at very high quality. The key characteristic of these new visual systems is the highly efficient compression of such content. In that context, this paper presents a novel approach for intra-prediction in video coding based on the combination of spatial closed- and open-loop predictions. This new tool, called Combined Intra-Prediction (CIP), enables better prediction of frame pixels, which is desirable for efficient video compression. The proposed tool addresses both rate-distortion performance enhancement and the low-complexity requirements that are imposed on codecs for the targeted high-resolution content. The novel perspective CIP offers is that of exploiting redundancy not only between neighboring blocks but also within a coding block. While the proposed tool enables yet another way to exploit spatial redundancy within video frames, its main strength is that it is inexpensive and simple to implement, which is a crucial requirement for video coding of demanding sources. As shown in this paper, CIP can be flexibly modeled to support various coding settings, providing a gain of up to 4.5% YUV BD-rate for the video sequences in the challenging High-Efficiency Video Coding Test Model.
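
As a loose illustration of blending two intra predictors (not the CIP tool specified in the paper), the sketch below mixes a closed-loop DC-style prediction built from reconstructed neighbours with a predictor that propagates values column by column inside the block; the weights, block size, and sample values are assumptions.

```python
# Minimal sketch: a weighted combination of a neighbour-based intra predictor
# and a predictor propagated within the block itself.
import numpy as np

def combined_intra_prediction(left_column, top_row, weight_open=0.5):
    """Blend a closed-loop DC-style prediction (from reconstructed neighbours)
    with an open-loop-style prediction carried across the block."""
    n = len(left_column)
    closed = np.full((n, n), (np.mean(left_column) + np.mean(top_row)) / 2.0)
    open_loop = np.empty((n, n))
    open_loop[:, 0] = left_column                      # seed from the left neighbours
    for col in range(1, n):
        open_loop[:, col] = open_loop[:, col - 1]      # carry the previous column forward
    return (1.0 - weight_open) * closed + weight_open * open_loop

if __name__ == "__main__":
    left = np.linspace(100.0, 170.0, 8)   # hypothetical reconstructed left neighbours
    top = np.full(8, 120.0)               # hypothetical reconstructed top neighbours
    print(np.round(combined_intra_prediction(left, top), 1))
```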

Collaboration


Dive into Marta Mrak's collaborations.

Top Co-Authors

Ebroul Izquierdo, Queen Mary University of London
Nikola Sprljan, Queen Mary University of London
Saverio G. Blasi, Queen Mary University of London
Toni Zgaljic, Queen Mary University of London