
Publication


Featured research published by Adam Luczak.


IEEE Transactions on Circuits and Systems for Video Technology | 2000

Spatio-temporal scalability for MPEG video coding

Marek Domanski; Adam Luczak; Slawomir Mackowiak

The existing standardized solutions for spatial scalability are not satisfactory, so new approaches are being actively explored. The goal of this paper is to improve the spatial scalability of MPEG-2 for progressive video. In order to avoid the excessively large base-layer bitstreams produced by some previously proposed spatially scalable coders, spatio-temporal scalability is proposed for video compression systems. It is assumed that a coder produces two bitstreams: the base-layer bitstream corresponds to pictures with reduced spatial and temporal resolution, while the enhancement-layer bitstream is used to transmit the information needed to retrieve images with full spatial and temporal resolution. In the base layer, temporal resolution reduction is obtained by B-frame data partitioning, i.e., by placing every second frame (a B-frame) in the enhancement layer. Subband (wavelet) analysis is used to provide spatial decomposition of the signal. Full compatibility with the MPEG-2 standard is ensured in the base layer. Compared to single-layer MPEG-2 encoding at bit rates below 6 Mbit/s, the bitrate overhead for scalability is less than 15% in most cases.
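The B-frame data partitioning described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual coder: the frame labels and GOP structure are assumptions, and only the layer split is shown.

```python
def partition_frames(frames):
    """Split a frame sequence into a base layer (half temporal
    resolution) and an enhancement layer holding the B-frames."""
    base = frames[0::2]         # every other frame stays in the base layer
    enhancement = frames[1::2]  # B-frames needed to restore full frame rate
    return base, enhancement

# Hypothetical display-order sequence with B-frames at odd positions:
frames = ["I0", "B1", "P2", "B3", "P4", "B5"]
base, enh = partition_frames(frames)
print(base)  # ['I0', 'P2', 'P4']
print(enh)   # ['B1', 'B3', 'B5']
```

A decoder that receives only the base layer simply plays the sequence at half the frame rate; receiving both layers restores full temporal resolution.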


Picture Coding Symposium | 2012

Coding of multiple video+depth using HEVC technology and reduced representations of side views and depth maps

Marek Domanski; Tomasz Grajek; Damian Karwowski; Jacek Konieczny; Maciej Kurc; Adam Luczak; Robert Ratajczak; Jakub Siast; Olgierd Stankiewicz; Jakub Stankowski; Krzysztof Wegner

During the last two decades, a new generation of video compression technology has been introduced about every 9 years. Each new generation halves the bitrate necessary compared to the previous generation. This increasing single-view compression performance is reflected in the increasing compression performance of multiview video coding. For multiview video with associated depth maps, an additional significant bitrate reduction may be achieved. The paper reports an original compression technology that was designed and developed at Poznań University of Technology in response to the MPEG Call for Proposals on 3D Video Coding Technology. The main idea of this technique is to predict the side views and the depth maps very efficiently from the base view.
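The halving claim above is easy to check arithmetically: if each generation halves the required bitrate, n generations reduce it by a factor of 2**n. The starting bitrate below is a made-up example, not a figure from the paper.

```python
def bitrate_after_generations(initial_mbps, generations):
    """Bitrate needed for the same quality after a number of
    compression-technology generations, under the halving assumption."""
    return initial_mbps / (2 ** generations)

# Hypothetical 8 Mbit/s stream across three generations (~27 years):
for g in range(4):
    print(g, bitrate_after_generations(8.0, g))  # 8.0, 4.0, 2.0, 1.0
```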


Picture Coding Symposium | 2015

A practical approach to acquisition and processing of free viewpoint video

Marek Domanski; Adrian Dziembowski; Dawid Mieloch; Adam Luczak; Olgierd Stankiewicz; Krzysztof Wegner

We deal with the processing of multiview video acquired by practical, and thus relatively simple, acquisition systems that have a limited number of cameras located around a scene on independent tripods. The real-camera locations are nearly arbitrary, as would be required in real-world Free-Viewpoint Television systems. The appropriate test video sequences are also reported. We describe a family of original extensions and adaptations of multiview video processing algorithms, adapted to arbitrary camera positions around a scene. These techniques constitute the video processing chain for Free-Viewpoint Television, as they are aimed at estimating the parameters of such a multi-camera system, video correction, depth estimation, and virtual view synthesis. Moreover, we demonstrate the need for a new compression technology capable of efficiently compressing sparse convergent views. Experimental results for processing the proposed test sequences are reported.


Digital Systems Design | 2010

Network-on-Multi-Chip (NoMC) for Multi-FPGA Multimedia Systems

Marta Stepniewska; Adam Luczak; Jakub Siast

Some applications, especially in the area of multimedia processing, need to be implemented on a multi-chip platform due to their size. An efficient communication infrastructure for such systems may be designed using Networks-on-Chip (NoCs). However, a network for multi-chip systems requires a scalable architecture. Moreover, for multimedia purposes, such a NoC should support a multicast transmission mode. In order to meet these requirements, we propose NoMC (Network-on-Multi-Chip), a hierarchical interconnect designed for multi-chip systems. The performance of the proposed network is assessed using a model of an MVC (Multiview Video Coding) coder. In such a system, the multicast transmission mode may yield an overall bandwidth gain of up to 30%. Moreover, the synthesis results show that the proposed network elements are easily synthesizable for FPGA devices.
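The intuition behind the multicast gain can be sketched with a simple count. Under unicast, a source injects one copy of a packet per destination; under multicast it injects once and the routers replicate downstream. The numbers below are purely illustrative; the paper's 30% figure comes from its MVC coder model, not from this toy.

```python
def unicast_injections(num_destinations):
    """Packets the source must inject to reach k destinations via unicast."""
    return num_destinations  # one full copy per destination

def multicast_injections(num_destinations):
    """Packets injected under multicast: one, replicated in-network."""
    return 1

# Hypothetical case: one video block fanned out to 4 processing nodes.
k = 4
saved = 1 - multicast_injections(k) / unicast_injections(k)
print(f"source-link traffic saved: {saved:.0%}")  # 75%
```

The saving at the source link grows with the fan-out; the network-wide gain is smaller because replicated copies still traverse the shared links closest to the destinations.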


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2014

Experiments on acquisition and processing of video for free-viewpoint television

Marek Domanski; Adrian Dziembowski; Agnieszka Kuehn; Maciej Kurc; Adam Luczak; Dawid Mieloch; Jakub Siast; Olgierd Stankiewicz; Krzysztof Wegner

The paper describes an experimental multiview video production, processing and delivery chain developed at Poznań University of Technology for research on free-viewpoint television. The multiview-video acquisition system consists of HD camera units with wireless synchronization, wireless control, video storage and power supply units. Therefore, no cabling is needed in the system, which is important for shooting real-world events. The system is mostly used with a nearly circular setup of cameras, but the camera locations are arbitrary, and the procedures for system calibration and multiview video correction are considered. The paper also deals with adapting the techniques implemented in the Depth Estimation Reference Software and the View Synthesis Reference Software to a circular camera arrangement.


International Conference on Multimedia and Expo | 2002

Simple global model of an MPEG-2 bitstream

Marek Domanski; Adam Luczak

The paper describes a simple empirical model of the bitstreams produced by MPEG-2 video coders. The number of bits that represent an I- or P-frame is expressed as a function of the quantization factor Q. The model is global in the sense that it describes the total number of bits in a frame, thus ignoring local bitrate variation within a frame as well as changes of Q within a frame. This five-parameter model fits the experimental data well. Four of these parameters may be fixed, yielding a less accurate one-parameter model that can be used for coder control. Experimental results are reported for several video test sequences in the BT.601/4CIF format encoded by an MPEG-2 MP@ML coder.
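A one-parameter frame-size model lends itself to simple rate control: calibrate the parameter from one encoded frame, then invert the model to pick Q for a bit budget. The abstract does not give the model's functional form, so the hyperbolic form B(Q) = a / Q below is an assumption for illustration only, not the paper's actual model.

```python
def calibrate(bits_observed, q_used):
    """Estimate the single model parameter a from one (bits, Q) sample,
    assuming the illustrative model B(Q) = a / Q."""
    return bits_observed * q_used

def choose_q(a, target_bits):
    """Invert B(Q) = a / Q to hit a target frame size in bits."""
    return a / target_bits

# Hypothetical calibration frame: 400 kbit at Q = 8.
a = calibrate(bits_observed=400_000, q_used=8)
print(choose_q(a, target_bits=200_000))  # → 16.0
```

In practice the controller would also clip Q to the encoder's legal range and re-calibrate as scene content changes.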


International Conference on Multimedia and Expo | 2016

New results in free-viewpoint television systems for horizontal virtual navigation

Marek Domanski; Maciej Bartkowiak; Adrian Dziembowski; Tomasz Grajek; Adam Grzelka; Adam Luczak; Dawid Mieloch; Jaroslaw Samelak; Olgierd Stankiewicz; Jakub Stankowski; Krzysztof Wegner

The paper presents the concept of a practical free-viewpoint television system with purely optical depth estimation. The system consists of camera modules that contain pairs or triples of cameras together with the respective microphones. The camera modules can be sparsely located at arbitrary positions around a scene. Each camera module is equivalent to a video camera with a depth sensor and microphones. The hardware requirements, the video and audio processing algorithms, and preliminary experimental results are reported. In particular, a compression technique for such systems is discussed that is more efficient than the new 3D-HEVC technology. A set of new test sequences obtained with the use of camera pairs is presented.


International Conference on Multimedia and Expo | 2002

Efficient hybrid video coders with spatial and temporal scalability

Marek Domanski; S. Makowiak; Lukasz Blaszak; Adam Luczak

The paper deals with an efficient coder structure appropriate for scalable coding of video. The coder consists of two motion-compensated hybrid coders with independent motion estimation and compensation. The structure implements spatial scalability, or mixed spatial and temporal scalability, which can be combined with fine-granular SNR scalability. The encoder exhibits extended capabilities of adaptation to network throughput. The H.263 video coding standard is used as a reference, but the results are also applicable to the MPEG-2, MPEG-4 and H.26L systems with minor modifications. The coder exhibits a high level of compatibility with standard H.263 and MPEG-2/4 coders.


International Conference on Image Processing | 2002

Fine granularity in multi-loop hybrid coders with multi-layer scalability

Marek Domanski; Slawomir Mackowiak; Lukasz Blaszak; Adam Luczak

The paper describes a generic multi-loop coder structure suitable for mixed spatial and temporal scalability combined with fine-granular SNR scalability. The structure is suitable for various variants of hybrid video coders such as MPEG-2, H.263 and H.26L. The idea of mixed spatial and temporal scalability, i.e., spatio-temporal scalability, is central to the proposal. Its application improves scalable coding efficiency, i.e., it decreases the scalability overhead. The coder consists of independently motion-compensated sub-coders that produce bitstreams corresponding to individual levels of spatio-temporal resolution. The bitrate can be smoothly matched to a particular channel bandwidth by data partitioning, which is related to drift errors in the decoder. Accumulation and propagation of these errors can be bounded by using proper group-of-pictures structures.


International Symposium on Circuits and Systems | 2000

Scalable MPEG video coding with improved B-frame prediction

Marek Domanski; Adam Luczak; Slawomir Mackowiak

Recently, there has been great interest in video codecs that implement spatial scalability. Unfortunately, the MPEG-2 and MPEG-4 coders that exhibit such functionality produce many more bits than corresponding single-layer coders. This bitrate overhead can be reduced by applying spatio-temporal scalability as proposed by the authors. The base-layer bitstream corresponds to pictures with reduced spatial and temporal resolution, while the enhancement-layer bitstream is used to transmit the information needed to retrieve images with full spatial and temporal resolution. Full compatibility with the MPEG standards is ensured in the base layer, where temporal resolution reduction is obtained by B-frame data partitioning, i.e., by placing every second frame (a B-frame) in the enhancement layer only. Improved prediction of the B-frames in the enhancement layer is proposed in this paper. The idea is to combine temporal forward and backward prediction with spatial interpolation. Experimental results show a clear improvement in MPEG-2-compatible scalable coding efficiency for the proposed scheme.
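The combined prediction idea can be sketched per pixel: blend the temporal forward and backward predictors with a spatially interpolated base-layer value. The equal temporal weights and the 50% spatial weight below are assumptions made for illustration; the paper's actual weighting and block-level mode selection are not reproduced here.

```python
def predict_pixel(fwd, bwd, spatial, w_t=0.25, w_s=0.5):
    """Blend forward, backward, and spatial predictors for one pixel.
    Weights sum to 1 (w_t applies to each temporal predictor)."""
    return w_t * fwd + w_t * bwd + w_s * spatial

# Hypothetical pixel values from the three predictors:
print(predict_pixel(fwd=100, bwd=110, spatial=104))  # → 104.5
```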

Collaboration


Dive into Adam Luczak's collaborations.

Top Co-Authors (all at Poznań University of Technology)

Marek Domanski
Slawomir Mackowiak
Olgierd Stankiewicz
Jakub Siast
Krzysztof Wegner
Tomasz Grajek
Jakub Stankowski
Maciej Kurc
Marta Stepniewska
Adrian Dziembowski