Marek Domanski
Poznań University of Technology
Publications
Featured research published by Marek Domanski.
IEEE Transactions on Circuits and Systems for Video Technology | 2000
Marek Domanski; Adam Luczak; Slawomir Mackowiak
The existing standardized solutions for spatial scalability are not satisfactory; therefore, new approaches are being actively explored. The goal of this paper is to improve the spatial scalability of MPEG-2 for progressive video. To avoid the excessively large base-layer bitstreams produced by some previously proposed spatially scalable coders, spatio-temporal scalability is proposed for video compression systems. It is assumed that the coder produces two bitstreams: the base-layer bitstream corresponds to pictures with reduced spatial and temporal resolution, while the enhancement-layer bitstream carries the information needed to retrieve images with full spatial and temporal resolution. In the base layer, temporal resolution reduction is obtained by B-frame data partitioning, i.e., by placing every second frame (a B-frame) in the enhancement layer. Subband (wavelet) analysis provides the spatial decomposition of the signal. Full compatibility with the MPEG-2 standard is ensured in the base layer. Compared to single-layer MPEG-2 encoding at bit rates below 6 Mbit/s, the bitrate overhead for scalability is less than 15% in most cases.
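The B-frame data partitioning described above can be sketched in a few lines. This is an illustrative Python sketch of the layering rule (frame indices stand in for coded frame data), not the authors' implementation:

```python
def partition_frames(frames):
    """Spatio-temporal layering sketch: frames at even positions form
    the half-rate base layer, while every second frame (a B-frame)
    is moved to the enhancement layer."""
    base = frames[0::2]          # reduced temporal resolution
    enhancement = frames[1::2]   # B-frames needed for full frame rate
    return base, enhancement
```

Merging the two lists back in interleaved order restores the full temporal resolution, mirroring how a decoder that receives both bitstreams reconstructs the full frame rate.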
IEEE Transactions on Image Processing | 2013
Marek Domanski; Olgierd Stankiewicz; Krzysztof Wegner; Maciej Kurc; Jacek Konieczny; Jakub Siast; Jakub Stankowski; Robert Ratajczak; Tomasz Grajek
We propose a new coding technology for 3D video represented by multiple views and their respective depth maps. The proposed technology is demonstrated as an extension of the recently developed High Efficiency Video Coding (HEVC). One base view is compressed into a standard bitstream (as in HEVC). The remaining views and the depth maps are compressed using new coding tools that mostly rely on view synthesis. In the decoder, those views and depth maps are derived via synthesis in 3D space from the decoded base view and from data corresponding to small disoccluded regions. The shapes and locations of those disoccluded regions can be derived by the decoder without any transmitted side information. To achieve high compression efficiency, we propose several new tools, such as depth-based motion prediction, joint high-frequency-layer coding, consistent depth representation, and nonlinear depth representation. The experiments show the high compression efficiency of the proposed technology: the bitrate needed to transmit two side views with depth maps is mostly less than 50% of the bitrate for a single-view video.
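As an illustration of why the disoccluded regions need no side information: the toy sketch below warps one scan line of the base view by a per-pixel disparity (as would be derived from depth) and marks the positions no base-view pixel reaches. Both encoder and decoder can compute this set from the decoded data alone. The function name and the integer-disparity model are illustrative assumptions, not the paper's algorithm:

```python
def synthesize_view(base_row, disparity):
    """Warp one scan line of the base view by per-pixel integer
    disparity; positions that stay unfilled are the disoccluded
    region, derivable identically at encoder and decoder."""
    width = len(base_row)
    target = [None] * width
    for x, pixel in enumerate(base_row):
        xs = x + disparity[x]
        if 0 <= xs < width:
            target[xs] = pixel
    disoccluded = [x for x, p in enumerate(target) if p is None]
    return target, disoccluded
```

Only the pixel values inside the disoccluded set would need to be transmitted for the side view; everything else is synthesized.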
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2009
Krzysztof Klimaszewski; Krzysztof Wegner; Marek Domanski
The paper deals with prospective 3D video transmission systems that would use compression of both multiview video and depth maps. It addresses the quality of views synthesized from other views transmitted together with depth information. For state-of-the-art depth map estimation and view synthesis techniques, the paper shows that the AVC/SVC-based Multiview Video Coding technique can be used to compress both view pictures and depth maps. Extensive experiments are reported in which synthesized video quality was estimated using both the PSNR index and subjective assessment. A critical value of the depth quantization parameter is defined as a function of the reference-view quantization parameter; for smaller depth-map quantization parameters, depth-map compression has negligible influence on the fidelity of synthesized views.
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2010
Jacek Konieczny; Marek Domanski
The paper deals with efficient exploitation of the mutual correlation that exists between the motion fields of individual views in multiview video. It describes a new technique for efficient representation of motion data in multiview video bitstreams that also carry depth maps. These depth maps may be used to derive motion information from neighboring views. Such inter-view prediction of motion vectors is the core idea of the InterView Direct compression mode proposed in this paper. Application of the new mode yields bitrate reductions between 2% and 13%, depending on the individual test sequence, the compression scenario, and the variant of the state-of-the-art multiview compression reference technique. This improvement has been demonstrated by extensive experimental tests on standard multiview test video sequences.
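The core of such an inter-view direct mode can be reduced to a toy sketch: depth gives the disparity between views, and the disparity points at the block in the neighboring view whose motion vector is reused, so no motion vector needs to be transmitted. The pinhole-camera disparity formula and all names here are illustrative assumptions, not the mode's exact design:

```python
def interview_direct_mv(neigh_mv_field, depth, focal, baseline, x, y):
    """Derive a motion vector for block (x, y) without transmitting it:
    convert depth to disparity (focal * baseline / depth), then reuse
    the motion vector of the corresponding block in the neighboring
    view's motion field."""
    disparity = round(focal * baseline / depth[y][x])
    xn = min(max(x + disparity, 0), len(neigh_mv_field[y]) - 1)
    return neigh_mv_field[y][xn]
```

The decoder repeats the same derivation from its decoded depth map, which is what makes the mode "direct".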
Picture Coding Symposium | 2012
Marek Domanski; Tomasz Grajek; Damian Karwowski; Jacek Konieczny; Maciej Kurc; Adam Luczak; Robert Ratajczak; Jakub Siast; Olgierd Stankiewicz; Jakub Stankowski; Krzysztof Wegner
During the last two decades, a new generation of video compression technology has been introduced roughly every nine years. Each new generation halves the bitrate necessary for a given quality as compared to the previous generation. This increasing single-view compression performance carries over to multiview video coding. For multiview video with associated depth maps, an additional significant bitrate reduction may be achieved. The paper reports an original compression technology designed and developed at Poznań University of Technology in response to the MPEG Call for Proposals on 3D Video Coding Technology. The main idea of this technique is to predict the side views and the depth maps very efficiently from the base view.
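The bitrate trend quoted above is simple compound arithmetic; a minimal sketch of the rule of thumb, with the function name as an assumption:

```python
def bitrate_after_generations(bitrate_mbps, generations):
    """Rule of thumb from the text: each new compression-technology
    generation halves the bitrate needed for comparable quality."""
    return bitrate_mbps / (2 ** generations)
```

For example, content that needs 8 Mbit/s today would, by this rule of thumb, need about 2 Mbit/s two generations later.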
International Conference on Image Processing | 2012
Jakub Stankowski; Marek Domanski; Olgierd Stankiewicz; Jacek Konieczny; Jakub Siast; Krzysztof Wegner
The paper deals with multiview video coding using the new High Efficiency Video Coding (HEVC) technology. The implementation of multiview video coding in the HEVC framework is described, together with new tools proposed by the authors. Extensive experimental results compare the compression performance of MVC (ISO/IEC 14496-10), HEVC simulcast, and two versions of the proposed “multiview HEVC”. For “multiview HEVC”, the results indicate a significant bitrate reduction of about 50% as compared to the state-of-the-art MVC technology standardized as a part of AVC (MPEG-4, H.264).
International Conference on Image Processing | 2004
Rafal Lange; Lukasz Blaszak; Marek Domanski
Proposed is a coder that produces a layered video representation, with layers corresponding to different spatial resolutions. The coder consists of AVC-based subcoders with independent motion estimation and compensation. Improved motion-vector encoding is provided for the enhancement layer. Other codec features include adaptive interpolation from the base layer, full AVC compatibility of the base layer, and syntax compatibility of the enhancement-layer bitstream. Interpolated reconstructed base-layer frames are used as additional reference frames in the enhancement layer, and the respective prediction modes are embedded into the prediction strategy of AVC. Experimental results show that the improved motion-vector encoding reduces the motion-vector portion of the enhancement-layer bitrate by up to 13%. Measured over all layers together, compression efficiency is significantly higher than for simulcast, while scalable codec complexity is only slightly higher than that of the respective simulcast codec.
Picture Coding Symposium | 2015
Marek Domanski; Adrian Dziembowski; Dawid Mieloch; Adam Luczak; Olgierd Stankiewicz; Krzysztof Wegner
We deal with the processing of multiview video acquired using practical, and thus relatively simple, acquisition systems with a limited number of cameras located around a scene on independent tripods. The real camera locations are nearly arbitrary, as would be required in real-world Free-Viewpoint Television systems. Appropriate test video sequences are also reported. We describe a family of original extensions and adaptations of multiview video processing algorithms suited to arbitrary camera positions around a scene. These techniques constitute the video processing chain for Free-Viewpoint Television, as they are aimed at estimating the parameters of such a multi-camera system, video correction, depth estimation, and virtual view synthesis. Moreover, we demonstrate the need for a new compression technology capable of efficiently compressing sparse convergent views. Experimental results for processing the proposed test sequences are reported.
International Conference on Image Processing | 2003
Piotr Stec; Marek Domanski
The paper describes a novel unassisted, fully automatic segmentation technique applicable to natural colour video sequences. The algorithm uses the fast marching method for fast extraction of semantic objects from a frame of a video sequence. These objects are extracted by joint motion and colour analysis. The algorithm handles the background in the same way as other objects and therefore does not need global motion compensation. The basic disadvantage of the fast marching method is the unidirectional motion of the active contour. The new technique overcomes this difficulty by an enhancement step consisting of colour processing only, performed on a small portion of a frame.
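The front-propagation idea behind the fast marching method can be sketched as a priority-queue update on a grid. The simplified Dijkstra-style update below (rather than the full first-order Eikonal solver used in practice) shows how the arrival-time map grows monotonically outward from seed points; this strictly outward, unidirectional growth of the front is the limitation the paper's colour-based enhancement step addresses:

```python
import heapq

def fast_marching(speed, seeds):
    """Propagate an arrival-time map T outward from seed cells over a
    grid of local front speeds; the front only ever moves forward,
    never retreats (simplified 4-neighbour Dijkstra update)."""
    h, w = len(speed), len(speed[0])
    T = [[float('inf')] * w for _ in range(h)]
    heap = []
    for y, x in seeds:
        T[y][x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        t, y, x = heapq.heappop(heap)
        if t > T[y][x]:
            continue  # stale entry, already settled earlier
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nt = t + 1.0 / speed[ny][nx]
                if nt < T[ny][nx]:
                    T[ny][nx] = nt
                    heapq.heappush(heap, (nt, ny, nx))
    return T
```

In a segmentation setting, the speed map would be derived from joint motion and colour cues so the front slows down at object boundaries; here it is just a uniform grid.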
International Conference on Image Processing | 2001
Marek Domanski; Slawomir Mackowiak
The paper describes a multi-layer video coder based on spatio-temporal scalability and data partitioning. The coder consists of two parts: a low-resolution coder and a full-resolution coder. The first encodes pictures with reduced spatial and temporal resolution; its data are partitioned into several base layers with fine granularity. The full-resolution encoder exploits images interpolated from the base layers, and here again the output data can be partitioned into several layers. For the two-layer system, the bitrate overhead measured relative to a single-layer MPEG-2 bitstream varies between about 10% and 25% for progressive television test sequences. Further layers add bitrate overheads of about 3% per layer. The coder structure exhibits a high level of compatibility with the individual building blocks of MPEG-2 coders.