
Publications


Featured research published by Roman C. Kordasiewicz.


IEEE Transactions on Circuits and Systems for Video Technology | 2007

Affine Motion Prediction Based on Translational Motion Vectors

Roman C. Kordasiewicz; Michael Gallant; Shahram Shirani

In video coding standards such as H.26x and MPEG-x, much of the compression comes from motion-compensated prediction (MCP). Translational motion vectors (MVs) poorly model complex motion, so coders using polynomial or affine MVs have been proposed in the past. In this paper, we demonstrate a novel affine predictor stage that can be easily incorporated into current codecs, greatly increasing MCP quality. If used passively to generate the final prediction, gains of up to 0.7 and 1.6 dB were easily realized for the "mobile" and "flower garden" video sequences, respectively. In addition, when the translational MVs are refined, gains of up to 0.98 and 1.88 dB were realized for the "mobile" and "flower garden" sequences, respectively.
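The abstract does not spell out how affine parameters are derived from translational MVs; a common approach, assumed here purely for illustration, is a least-squares fit of a six-parameter affine model to the displacements implied by the translational MVs at the block centres. The sketch below shows that idea; the function name and the NumPy-based interface are assumptions, not taken from the paper.

```python
import numpy as np

def fit_affine_from_mvs(centers, mvs):
    """Least-squares fit of a 6-parameter affine motion model.

    centers: (N, 2) array of block-centre coordinates (x, y)
    mvs:     (N, 2) array of translational motion vectors (dx, dy)

    Returns (a, b, c, d, e, f) such that the displacement predicted at
    (x, y) is (a*x + b*y + c, d*x + e*y + f).
    """
    x, y = centers[:, 0], centers[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])             # shared design matrix
    px, _, _, _ = np.linalg.lstsq(A, mvs[:, 0], rcond=None)  # a, b, c
    py, _, _, _ = np.linalg.lstsq(A, mvs[:, 1], rcond=None)  # d, e, f
    return (*px, *py)

# Toy usage: four 16x16 block centres and their translational MVs.
centers = np.array([[8, 8], [24, 8], [8, 24], [24, 24]], dtype=float)
mvs = np.array([[1.0, 0.5], [1.2, 0.5], [1.0, 0.9], [1.2, 0.9]])
print(fit_affine_from_mvs(centers, mvs))
```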


Canadian Conference on Electrical and Computer Engineering | 2004

Hardware implementation of the optimized transform and quantization blocks of H.264

Roman C. Kordasiewicz; Shahram Shirani

H.264, also known as MPEG-4 Part 10 or JVT, is a new video coding standard that is extremely efficient and is poised to appear in the next generation of HD-DVD players and recorders. This paper presents one of the first hardware architectures for the transform and quantization blocks, incorporated into a software/hardware system implemented on a Virtex II Pro FPGA. The implementation focuses on eliminating drift effects, on a multiply-free, low-gain transform, and on reducing memory bandwidth. A large system on a programmable chip was developed; it uses a PowerPC (PPC) running a software program that can perform the DCT and quantization in either software or hardware. This paper presents DCT and quantization blocks that can process about 1500 Mpixel/s, and a system that can process about 0.8 Mpixel/s.
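For reference, the 4x4 forward integer transform at the core of H.264 can be written as Y = Cf X Cf^T with a small integer matrix. The sketch below shows only that core operation (the standard folds scaling into the subsequent quantization step) and is an illustration, not the paper's hardware architecture.

```python
import numpy as np

# H.264 4x4 forward core transform matrix (integer approximation of the DCT).
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_core_transform(block):
    """Apply the 4x4 integer core transform Y = Cf @ X @ Cf.T.

    block: 4x4 array of residual samples. The scaling that the standard
    folds into quantization is not included here.
    """
    X = np.asarray(block, dtype=np.int64)
    return CF @ X @ CF.T

# Toy usage on a small residual block.
residual = np.arange(16).reshape(4, 4) - 8
print(forward_core_transform(residual))
```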


International Conference on Image Processing | 2005

ASIC and FPGA implementations of H.264 DCT and quantization blocks

Roman C. Kordasiewicz; Shahram Shirani

In the search for ever better and faster video compression standards, H.264 was created, and with it arose the need for hardware acceleration of its most computationally intensive parts. To address this need, this paper proposes two sets of architectures for the integer discrete cosine transform (DCT) and quantization blocks of H.264. The first set of architectures was optimized for area, resulting in transform and quantizer blocks that occupy 294 and 1749 gates, respectively. The second set of speed-optimized designs has a throughput ranging from 11 to 2552 Mpixel/s. All of the designs were synthesized for Xilinx Virtex-II Pro and 0.18 μm TSMC CMOS technology, and the combined DCT and quantization blocks were taken through a complete place-and-route flow.
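To make the quantization step concrete, the sketch below shows the general shape of H.264-style scalar quantization of transform coefficients, Z = sign(W) * ((|W| * MF + f) >> qbits) with qbits = 15 + floor(QP/6). The single MF value used here is a placeholder, since the standard derives MF from QP mod 6 and the coefficient position, so treat this as a simplified illustration rather than the implemented block.

```python
import numpy as np

def quantize_coeffs(W, QP, MF=13107, intra=True):
    """Simplified H.264-style quantization of a 4x4 coefficient block.

    Z = sign(W) * ((|W| * MF + f) >> qbits), with qbits = 15 + QP // 6.
    A single MF value is used as a placeholder; the standard derives MF
    from QP % 6 and the coefficient position.
    """
    W = np.asarray(W, dtype=np.int64)
    qbits = 15 + QP // 6
    f = (1 << qbits) // (3 if intra else 6)   # common rounding offsets
    return np.sign(W) * ((np.abs(W) * MF + f) >> qbits)

# Toy usage on a block of transform coefficients.
coeffs = np.array([[120, -40, 8, 0],
                   [-32,  16, 4, 0],
                   [  8,  -4, 2, 0],
                   [  0,   0, 0, 0]])
print(quantize_coeffs(coeffs, QP=28))
```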


IEEE Global Conference on Signal and Information Processing | 2014

Study of the effects of stalling events on the quality of experience of mobile streaming videos

Deepti Ghadiyaram; Alan C. Bovik; Hojatollah Yeganeh; Roman C. Kordasiewicz; Michael Gallant

We have created a new mobile video database that models distortions caused by network impairments. In particular, we simulate stalling events and startup delays in over-the-top (OTT) mobile streaming videos. We describe the way we simulated diverse stalling events to create a corpus of distorted videos and the human study we conducted to obtain subjective scores. We also analyzed the ratings to understand the impact of several factors that influence the quality of experience (QoE). To the best of our knowledge, ours is the most comprehensive and diverse study on the effects of stalling events on QoE. We are making the database publicly available [1] in order to help advance state-of-the-art research on user-centric mobile network planning and management.
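The database itself is not reproduced here, but a minimal sketch of how stalling and startup delay can be injected into a playout timeline is shown below; the frame rate, stall positions, and durations are made-up parameters for illustration, not the study's protocol.

```python
def playout_timeline(num_frames, fps, startup_delay, stalls):
    """Return (wall_clock_time, frame_index) pairs for a stalled playout.

    startup_delay: seconds before the first frame is shown.
    stalls: list of (frame_index, duration_seconds) rebuffering events.
    All values used here are illustrative, not the study's settings.
    """
    stall_at = dict(stalls)
    t = startup_delay
    timeline = []
    for i in range(num_frames):
        t += stall_at.get(i, 0.0)   # freeze before showing frame i
        timeline.append((round(t, 3), i))
        t += 1.0 / fps
    return timeline

# Toy usage: 2-second startup delay and a 1.5-second stall at frame 120.
schedule = playout_timeline(num_frames=300, fps=30,
                            startup_delay=2.0, stalls=[(120, 1.5)])
print(schedule[118:123])
```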


International Conference on Image Processing | 2014

Delivery quality score model for Internet video

Hojatollah Yeganeh; Roman C. Kordasiewicz; Michael Gallant; Deepti Ghadiyaram; Alan C. Bovik

The vast majority of today's Internet video services are consumed over-the-top (OTT) via reliable streaming (HTTP over TCP), where the primary noticeable delivery-related impairments are startup delay and stalling. In this paper, we introduce an objective model, the delivery quality score (DQS) model, to predict users' QoE in the presence of such impairments. We describe a large subjective study that we carried out to tune and validate this model. Our experiments demonstrate that the DQS model correlates highly with the subjective data and that it outperforms other emerging models.
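The abstract does not specify the DQS model's form; as a loudly hypothetical stand-in, the sketch below shows only the general shape such a delivery-impairment model might take, penalizing startup delay, total stall time, and stall count with invented weights that have no connection to the actual DQS parameters.

```python
def toy_delivery_score(startup_delay, stall_durations,
                       w_startup=0.5, w_stall=1.0, w_count=0.3):
    """Hypothetical delivery-quality score on a 0-100 scale.

    This is NOT the DQS model from the paper; the functional form and
    weights are invented purely to illustrate the kind of inputs
    (startup delay, stall durations, stall count) such a model uses.
    """
    penalty = (w_startup * startup_delay
               + w_stall * sum(stall_durations)
               + w_count * len(stall_durations))
    return max(0.0, 100.0 - 10.0 * penalty)

print(toy_delivery_score(startup_delay=2.0, stall_durations=[1.5, 0.8]))
```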


IEEE Transactions on Multimedia | 2007

Encoding of Affine Motion Vectors

Roman C. Kordasiewicz; Michael Gallant; Shahram Shirani

An affine motion model provides better motion representation than a translational motion model, so it is a good candidate for advanced video compression algorithms that require higher compression efficiency than current ones. One disadvantage of the affine motion model is the increased number of motion vector parameters, and therefore an increased motion vector bit rate. We develop and analyze several simulation-based approaches to entropy coding of orthonormalized affine motion vector (AMV) coefficients, considering various context types and coders. We expand the traditional idea of a context type by introducing four new context types, and we compare our method of context-type and coder selection with context quantization. The best of our context-type and coder solutions produces 4% to 15% average AMV bit-rate reductions over the original VLC approach. For more difficult content, AMV bit-rate reductions of up to 26% are reported.
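As a point of reference for the VLC baseline mentioned above, a signed exponential-Golomb code of the kind H.264 uses for signed syntax elements is sketched below; whether the paper's "original VLC approach" is exactly this code is an assumption, so treat it only as an illustration of coefficient-level variable-length coding.

```python
def signed_exp_golomb(v):
    """Encode a signed integer with the 0th-order exponential-Golomb code,
    using the signed mapping from H.264 (se(v)).

    v > 0  -> code_num = 2*v - 1
    v <= 0 -> code_num = -2*v
    The codeword is (leading zeros) followed by binary(code_num + 1).
    """
    code_num = 2 * v - 1 if v > 0 else -2 * v
    binary = bin(code_num + 1)[2:]           # binary string of code_num + 1
    return "0" * (len(binary) - 1) + binary  # zero prefix, then the value

# Toy usage on a few quantized AMV coefficient values.
for value in [0, 1, -1, 2, -3]:
    print(value, signed_exp_golomb(value))
```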


IEEE Transactions on Circuits and Systems for Video Technology | 2007

Modeling Quantization of Affine Motion Vector Coefficients

Roman C. Kordasiewicz; Michael Gallant; Shahram Shirani

Affine motion compensated prediction (AMCP) is an advanced tool that may be incorporated into future video compression standards, and numerous coders already use AMCP. However, the increased number of motion vector components is a disadvantage, and quantizing these components can have significant consequences for the difference macroblocks (DMBs). This paper examines the quantization of affine motion vector (AMV) coefficients by deriving a quadratic relationship between DMB energy and AMV quantization step size. Mathematical derivations and simulations are provided, including two literature comparisons demonstrating the benefits of this work. In the first comparison, the quantization of orthogonalized AMVs from the literature is compared with quantization guided by the novel quadratic model. In the second comparison, Nokia's MVC coder is modified to use the quadratic model to generate quantization step sizes at various granularities (sequence, frame, and quarter-frame), demonstrating up to 8.7% bit-rate reductions. Model-driven AMV quantization step-size choices are shown to be very close to, and even to outperform, limited exhaustive-search step-size choices, at a quarter of the computational cost.
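The abstract gives only the shape of the model; written out, the claimed relationship is a quadratic of the following general form, where the coefficients are placeholders fitted to the content rather than the paper's actual parameterization.

```latex
% General quadratic form relating DMB energy to the AMV quantization
% step size \Delta; \alpha, \beta, \gamma are illustrative coefficients,
% not values taken from the paper.
E_{\mathrm{DMB}}(\Delta) \approx \alpha \, \Delta^{2} + \beta \, \Delta + \gamma
```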


International Conference on Acoustics, Speech, and Signal Processing | 2007

Affine Prediction as a Post Processing Stage

Roman C. Kordasiewicz; Michael Gallant; Shahram Shirani

Translational motion vectors (MVs) and macroblock (MB) frame partitioning are the predominant means of motion estimation (ME) and motion compensation (MC). However, the translational motion model does not sufficiently describe complex motion such as rotation, zoom, or shearing. To remedy this, one can compute more advanced motion parameters and/or partition the frame differently, but these approaches are either very computationally expensive or have limited search ranges. Thus, in this paper we propose a novel post-processing stage that can be easily incorporated into most current coders. This stage generates the predictor for each inter MB based on an affine motion model using translational motion vectors. Our approach has very low computational complexity, yet average PSNR gains of up to 0.6 dB were realized for video sequences with complex motion.
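To make the predictor-generation step concrete, the sketch below warps a reference frame with a six-parameter affine model to form a macroblock predictor; nearest-neighbour sampling is used for brevity, and the interface is an illustrative assumption rather than the paper's design.

```python
import numpy as np

def affine_mb_predictor(ref, mb_x, mb_y, params, size=16):
    """Build an affine-motion-compensated predictor for one macroblock.

    ref:    2D reference frame (grayscale).
    mb_x/y: top-left corner of the macroblock in the current frame.
    params: (a, b, c, d, e, f) so a pixel at (x, y) is predicted from
            ref[y + d*x + e*y + f, x + a*x + b*y + c].
    Nearest-neighbour sampling keeps the sketch short.
    """
    a, b, c, d, e, f = params
    h, w = ref.shape
    pred = np.zeros((size, size), dtype=ref.dtype)
    for j in range(size):
        for i in range(size):
            x, y = mb_x + i, mb_y + j
            src_x = min(max(int(round(x + a * x + b * y + c)), 0), w - 1)
            src_y = min(max(int(round(y + d * x + e * y + f)), 0), h - 1)
            pred[j, i] = ref[src_y, src_x]   # clamped to frame borders
    return pred

# Toy usage with a synthetic gradient frame and a small zoom-like model.
frame = np.add.outer(np.arange(64), np.arange(64)).astype(np.uint8)
print(affine_mb_predictor(frame, 16, 16, (0.01, 0.0, 1.0, 0.0, 0.01, 0.5)))
```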


International Conference on Image Processing | 2005

Modelling the effect of quantizing affine motion vectors on rate and energy of difference macroblocks

Roman C. Kordasiewicz; Shahram Shirani; Michael Gallant

This paper derives the relationship between the energy of the difference macroblock and the affine motion vector quantization step size. This is an important step in the analysis of advanced motion models for future video coding methods, as it facilitates rate optimization for affine motion vectors. The derived model shows that the difference macroblock energy has a quadratic relationship with the affine motion vector quantization step size. In addition, experimental results are presented that validate the model and provide further insight.
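A quick way to check such a relationship empirically is to fit a quadratic to measured (step size, DMB energy) pairs and inspect the fit, as sketched below; the numbers are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

# Synthetic (quantization step size, DMB energy) measurements; real data
# would come from encoding runs, these values are made up.
step_sizes = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
dmb_energy = np.array([12.0, 31.0, 105.0, 390.0, 1540.0])

# Fit E(delta) ~ alpha*delta^2 + beta*delta + gamma and report the error.
alpha, beta, gamma = np.polyfit(step_sizes, dmb_energy, deg=2)
predicted = np.polyval([alpha, beta, gamma], step_sizes)
print("coefficients:", alpha, beta, gamma)
print("max relative error:", np.max(np.abs(predicted - dmb_energy) / dmb_energy))
```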


Archive | 2011

System for monitoring a video network and methods for use therewith

Michael Gallant; Michael Archer; Kevin Goertz; Anthony Peter Joch; Roman C. Kordasiewicz

Collaboration


Dive into Roman C. Kordasiewicz's collaborations.

Top Co-Authors

Alan C. Bovik (University of Texas at Austin)
Deepti Ghadiyaram (University of Texas at Austin)