
Publications


Featured research published by Yuriy A. Reznik.


IEEE Transactions on Circuits and Systems for Video Technology | 2001

Video coding for streaming media delivery on the Internet

Gregory J. Conklin; Gary S. Greenbaum; Karl Olav Lillevold; Alan F. Lippman; Yuriy A. Reznik

We provide an overview of the architecture of today's Internet streaming media delivery networks and describe various problems that such systems pose with regard to video coding. We demonstrate that, based on the distribution model (live or on-demand), the type of network delivery mechanism (unicast versus multicast), and the optimization criteria associated with particular segments of the network (e.g., minimization of distortion for a given connection rate, minimization of traffic in the dedicated delivery network, etc.), it is possible to identify several models of communication that may require different treatment from both source and channel coding perspectives. We explain how some of these problems can be addressed using a conventional framework of temporal motion-compensated, transform-based video compression, supported by appropriate channel-adaptation mechanisms in the client and server components of a streaming media system. Most of these techniques have already been implemented in RealNetworks(R) RealSystem(R) 8 and its RealVideo(R) 8 codec, which we use throughout the paper to illustrate our results.


data compression conference | 2004

MPEG-4 ALS: an emerging standard for lossless audio coding

Tilman Liebchen; Yuriy A. Reznik

This paper provides a brief overview of MPEG-4 ALS, an emerging ISO/IEC standard for lossless audio coding, explains the choice of algorithms used in its design, and compares it to current state-of-the-art algorithms for lossless audio compression.


picture coding symposium | 2013

Perceptual pre-processing filter for user-adaptive coding and delivery of visual information

Rahul Vanam; Yuriy A. Reznik

We describe the design of an adaptive video delivery system employing a perceptual preprocessing filter. The filter receives parameters of the reproduction setup, such as viewing distance, pixel density, and ambient illuminance. It then applies a contrast sensitivity model of human vision to remove spatial oscillations that are invisible under those conditions. By removing such oscillations the filter simplifies the video content, leading to more efficient encoding without causing any visible alterations of the content. Through experiments, we demonstrate that the use of our filter can yield significant bit rate savings compared to conventional encoding methods that are not tailored to specific viewing conditions.
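The core idea of such a filter can be sketched numerically: convert the viewing setup into a pixels-per-degree figure, evaluate a contrast sensitivity function (CSF), and find the highest spatial frequency that remains visible. The sketch below is illustrative only, not the paper's model; the exponential CSF fit and its constants (`a`, `b`, `c`) and the 12-inch/326-ppi example are assumptions chosen for demonstration.

```python
import math

def pixels_per_degree(viewing_distance_in, pixel_density_ppi):
    """Pixels subtended by one degree of visual angle (small-angle approx.)."""
    return viewing_distance_in * pixel_density_ppi * math.pi / 180.0

def csf(f_cpd, a=75.0, b=0.2, c=0.8):
    """A simple exponential-fit contrast sensitivity function (illustrative
    constants), evaluated at spatial frequency f in cycles/degree."""
    return a * (f_cpd ** c) * math.exp(-b * f_cpd)

def cutoff_frequency(min_sensitivity=1.0, f_max=60.0, step=0.01):
    """Highest spatial frequency still visible, i.e. where the CSF is at
    least the reciprocal of the maximum achievable contrast."""
    f = f_max
    while f > 0 and csf(f) < min_sensitivity:
        f -= step
    return f

# Example setup (hypothetical): phone held at 12 inches, 326 ppi screen
ppd = pixels_per_degree(12.0, 326.0)
nyquist_cpd = ppd / 2.0    # highest frequency the display can reproduce
f_cut = cutoff_frequency() # highest frequency the eye can resolve
# Oscillations above min(f_cut, nyquist_cpd) could be filtered before encoding
```

The pre-filter's cutoff would then be the smaller of the display Nyquist limit and the perceptual cutoff, expressed as a normalized digital frequency.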


data compression conference | 2013

Fast Transforms for Intra-prediction-based Image and Video Coding

Ankur Saxena; Felix C. A. Fernandes; Yuriy A. Reznik

In this paper, we provide an overview of the DCT/DST transform scheme for intra coding in the HEVC standard. A unique feature of this scheme is the use of DST-VII transforms in addition to DCT-II. We further derive factorizations for fast joint computation of DCT-II and DST-VII transforms of several sizes. Simulation results for the DCT/DST scheme in the HM reference software for HEVC are also provided together with a discussion on computational complexity.
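As an illustration of the two transform families involved, the orthonormal DCT-II and DST-VII matrices can be built directly from their textbook definitions. This is a minimal numpy sketch of the real-valued transforms, not the integer approximations actually specified in HEVC:

```python
import numpy as np

def dct2_matrix(N):
    """Orthonormal DCT-II matrix: C[k, n] = sqrt(2/N) cos(pi (2n+1) k / 2N)."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)  # DC row scaled to make the matrix orthonormal
    return C

def dst7_matrix(N):
    """Orthonormal DST-VII matrix:
    S[k, n] = 2/sqrt(2N+1) sin(pi (2k+1)(n+1) / (2N+1))."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    return (2.0 / np.sqrt(2 * N + 1)) * np.sin(
        np.pi * (2 * k + 1) * (n + 1) / (2 * N + 1))

C, S = dct2_matrix(4), dst7_matrix(4)
# Both are orthonormal, so the inverse transform is simply the transpose
assert np.allclose(C @ C.T, np.eye(4))
assert np.allclose(S @ S.T, np.eye(4))
```

Orthonormality is what makes the transpose serve as the inverse transform, and the structural similarity of the two matrices is what the factorizations for joint computation exploit.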


2013 20th International Packet Video Workshop | 2013

Motion Compensated Error Concealment for HEVC Based on Block-Merging and Residual Energy

Yueh-Lun Chang; Yuriy A. Reznik; Zhifeng Chen; Pamela C. Cosman

We propose a motion-compensated error concealment method for HEVC and implement it in the HM reference software. The motion vector from the co-located block is refined for motion compensation. Based on the reliability of these MVs, blocks are merged and assigned new MVs. The experimental results show both a substantial PSNR gain and an improvement in visual quality.
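The basic motion-vector-copy step underlying such concealment can be sketched as follows. This is a deliberately simplified illustration: the paper's MV refinement, reliability test, and block merging are omitted, and the function and parameter names are hypothetical.

```python
import numpy as np

def conceal_lost_block(prev_frame, colocated_mv, y, x, bs=8):
    """Conceal a lost block by motion-compensating from the previous frame,
    reusing the motion vector of the co-located block (simplified sketch)."""
    H, W = prev_frame.shape
    dy, dx = colocated_mv
    # Clamp the motion-compensated source region to the frame boundaries
    sy = min(max(y + dy, 0), H - bs)
    sx = min(max(x + dx, 0), W - bs)
    return prev_frame[sy:sy + bs, sx:sx + bs].copy()

# Example: conceal an 8x8 block at (16, 16) using co-located MV (2, -3)
prev = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
patch = conceal_lost_block(prev, (2, -3), 16, 16)
```

In the actual method, neighboring blocks whose MVs are judged reliable would be merged and a refined MV assigned before this compensation step.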


ieee global conference on signal and information processing | 2013

Improving coding and delivery of video by exploiting the oblique effect

Yuriy A. Reznik; Rahul Vanam

The oblique effect implies lower visual sensitivity to diagonally oriented spatial oscillations than to horizontal and vertical ones. To exploit this phenomenon, we propose applying an adaptive anisotropic low-pass filter to video prior to encoding. We then describe the design of such a filter. Through experiments, we demonstrate that the use of this filter can yield appreciable bitrate savings compared to conventional filtering and encoding of the same content.
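An anisotropic low-pass of this kind can be illustrated with a frequency-domain mask whose cutoff is widest along the horizontal/vertical axes and shrinks toward the 45-degree diagonals. This is a toy sketch, not the paper's filter; the cutoff values `f_hv` and `f_diag` are arbitrary assumptions.

```python
import numpy as np

def anisotropic_lowpass(img, f_hv=0.45, f_diag=0.30):
    """Frequency-domain low-pass whose cutoff shrinks toward the diagonals,
    exploiting lower visual sensitivity to oblique oscillations (sketch)."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    r = np.hypot(fx, fy)        # radial spatial frequency (cycles/pixel)
    theta = np.arctan2(fy, fx)  # orientation of each frequency component
    # Cutoff varies with orientation: f_hv on the axes, f_diag at 45 degrees
    cutoff = f_hv - (f_hv - f_diag) * np.abs(np.sin(2 * theta))
    mask = (r <= cutoff).astype(float)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))
```

With these settings, a diagonal oscillation of radial frequency 0.35 cycles/pixel would be removed while a horizontal one of the same frequency survives, which is exactly the asymmetry the oblique effect permits.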


data compression conference | 2013

Improving the Efficiency of Video Coding by Using Perceptual Preprocessing Filter

Rahul Vanam; Yuriy A. Reznik

We describe the design of a perceptual preprocessing filter for improving the effectiveness of video coding. This filter uses known parameters of the reproduction setup, such as viewing distance, pixel density, and contrast ratio of the screen, together with a contrast sensitivity model of human vision, to identify spatial oscillations that are invisible. By removing such oscillations the filter simplifies the video content, leading to more efficient encoding without causing any visible alterations of the content. Through experiments, we demonstrate that the use of our filter can yield significant bit rate savings compared to conventional encoding methods that are not tailored to specific viewing conditions.


visual communications and image processing | 2012

User-adaptive mobile video streaming

Yuriy A. Reznik; Ed Asbun; Zhifeng Chen; Yan Ye; Eldad Zeira; Rahul Vanam; Zheng Yuan; Gregory S. Sternberg; Ariela Zeira; Naresh Soni

Summary form only given. We describe the design of a mobile streaming system, which optimizes video delivery based on dynamic analysis of user behavior and viewing conditions, including user proximity, viewing angle, and ambient illuminance.


international conference on image processing | 2014

Perceptual pre-processing filter for adaptive video on demand content delivery

Rahul Vanam; Louis Kerofsky; Yuriy A. Reznik

We describe the use of perceptual pre-processing to reduce the bitrate needed for delivery of Video on Demand (VOD) content. The proposed system exploits estimated viewing conditions to remove image oscillations that are not visible to the user under those conditions. The pre-processing uses parameters such as viewing distance, pixel density, and ambient illumination. A model of human visual system contrast sensitivity is used to remove oscillations in the image data that cannot be seen and need not be encoded. Experiments demonstrate significant bitrate savings compared to conventional encoding methods that do not exploit viewing conditions.


international conference on acoustics, speech, and signal processing | 2013

Relationship between DCT-II, DCT-VI, and DST-VII transforms

Yuriy A. Reznik

Discrete Sine Transforms of type VII (DST-VII) have recently received considerable interest in video coding. In this paper, we show that there exists a direct connection between DST-VII and DCT-II transforms, allowing their joint computation for certain transform sizes. This connection also yields fast algorithms for computing the DCT-VI and DST-VII.
