Publication


Featured research published by Prasanjit Panda.


International Conference on Image Processing | 2007

Low-Drift Fixed-Point 8x8 IDCT Approximation with 8-Bit Transform Factors

Yuriy Reznik; De Hsu; Prasanjit Panda; Brijesh Pillai

We describe an efficient algorithm for computing the inverse discrete cosine transform (IDCT) for image and video coding applications. This algorithm was derived by converting an 8-point IDCT factorization of C. Loeffler, A. Ligtenberg, and G. Moschytz into a scaled form, leaving 8 multiplications by irrational factors inside the transform. The key advantage of this modification is that these factors can be represented with sufficient accuracy by 8-bit integer values, resulting in a very small dynamic range of variables inside the transform. Our scaled 1D transform can be implemented either by using 8 multiplications, 26 additions, and 6 shifts, or (in a multiplier-less fashion) by using only 44 additions and 18 shifts. This implementation fully complies with the new MPEG IDCT precision standard (ISO/IEC 23002-1, the replacement for the former IEEE 1180 specification) and shows remarkably low drift in decoding of H.263, MPEG-2, and MPEG-4 bitstreams produced by reference software encoders (which employ 64-bit floating-point DCT and IDCT implementations).
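The operation-count trade-off quoted above (8 multiplications, 26 additions, and 6 shifts versus 44 additions and 18 shifts) follows from the fact that a multiplication by an 8-bit constant can be decomposed into additions and shifts. Below is a minimal Python sketch of that idea, assuming an illustrative factor of 181/128 ≈ √2; the actual constants and factorization of the paper are not reproduced here.

```python
# Illustrative only: applying one irrational transform factor with an 8-bit
# integer approximation. Assumed factor: sqrt(2) ~= 181/128 (181 fits in 8 bits).

def mul_sqrt2(x: int) -> int:
    """Fixed-point multiply by sqrt(2): one multiplication, one shift."""
    return (x * 181) >> 7            # 181/128 = 1.4140625 vs sqrt(2) = 1.41421...

def mul_sqrt2_multiplierless(x: int) -> int:
    """The same product built from additions and shifts only:
    181 = 128 + 32 + 16 + 4 + 1, so
    x*181 = (x << 7) + (x << 5) + (x << 4) + (x << 2) + x."""
    return ((x << 7) + (x << 5) + (x << 4) + (x << 2) + x) >> 7

# The two forms are bit-exact, which is why a multiplier-less implementation
# can meet the same precision requirements as the multiply-based one.
assert all(mul_sqrt2(v) == mul_sqrt2_multiplierless(v) for v in range(-256, 256))
```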


IEEE Transactions on Circuits and Systems for Video Technology | 2018

In-Capture Mobile Video Distortions: A Study of Subjective Behavior and Objective Algorithms

Deepti Ghadiyaram; Janice Pan; Alan C. Bovik; Anush K. Moorthy; Prasanjit Panda; Kai-Chieh Yang

Digital videos often contain visual distortions that are introduced by the camera’s hardware or processing software during the capture process. These distortions often detract from a viewer’s quality of experience. Understanding how human observers perceive the visual quality of digital videos is of great importance to camera designers. Thus, the development of automatic objective methods that accurately quantify the impact of visual distortions on perception has greatly accelerated. Video quality algorithm design and verification require realistic databases of distorted videos and human judgments of them. However, most current publicly available video quality databases have been created under highly controlled conditions, using graded, simulated, post-capture distortions (such as jitter and compression artifacts) applied to high-quality videos. The plethora of commercial hand-held mobile video capture devices produces videos that are often afflicted by a variety of complex distortions generated during the capture process. These in-capture distortions are not well modeled by the synthetic, post-capture distortions found in existing video quality assessment (VQA) databases. Toward overcoming this limitation, we designed and created a new database that we call the LIVE-Qualcomm mobile in-capture video quality database, comprising a total of 208 videos that model six common in-capture distortions. We also conducted a subjective quality assessment study using this database, in which each video was assessed by 39 unique subjects. Furthermore, we evaluated several top-performing no-reference image quality assessment (IQA) and VQA algorithms on the new database and studied how real-world in-capture distortions challenge both human viewers and automatic perceptual quality prediction models. The new database is freely available at: http://live.ece.utexas.edu/research/incaptureDatabase/index.html.
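A note on how such an evaluation is typically scored: objective predictions are correlated against the per-video mean opinion scores from the subjective study. The Python sketch below assumes Spearman (SROCC) and Pearson (PLCC) correlations, the standard agreement measures in VQA evaluation; the paper's exact protocol, metrics, and results are not restated here, and all data in the example are synthetic.

```python
# Minimal sketch: correlating objective model predictions with subjective MOS.
# SROCC/PLCC are assumed metrics; all numbers below are synthetic.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate_vqa_model(predicted: np.ndarray, mos: np.ndarray) -> dict:
    """Return rank-order (SROCC) and linear (PLCC) agreement between a
    model's predicted quality scores and mean opinion scores (MOS)."""
    srocc, _ = spearmanr(predicted, mos)   # monotonic agreement
    plcc, _ = pearsonr(predicted, mos)     # linear agreement
    return {"SROCC": srocc, "PLCC": plcc}

# Hypothetical usage: one prediction and one MOS per video (208 in the database).
rng = np.random.default_rng(0)
mos = rng.uniform(0.0, 100.0, size=208)
predicted = mos + rng.normal(0.0, 10.0, size=208)   # a noisy hypothetical model
print(evaluate_vqa_model(predicted, mos))
```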


International Conference on Acoustics, Speech, and Signal Processing | 2017

Subjective and Objective Quality Assessment of Mobile Videos with In-Capture Distortions

Deepti Ghadiyaram; Janice Pan; Alan C. Bovik; Anush K. Moorthy; Prasanjit Panda; Kai-Chieh Yang

We designed and created a new video database that models a variety of complex distortions generated during the video capture process on hand-held mobile devices. We describe the content and characteristics of the new database, which we call the LIVE Mobile In-Capture Video Quality Database. It comprises a total of 208 videos that were captured using eight different smartphones and were affected by six common in-capture distortions. We also conducted a subjective video quality assessment study using this data, wherein each video was assessed by 36 unique subjects. We evaluated several top-performing no-reference IQA and VQA algorithms on the new database and gained insight into how real-world in-capture distortions challenge both human subjects and automatic perceptual quality prediction models.
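Before objective algorithms can be compared against such a study, the raw per-subject ratings must first be reduced to a single score per video. The sketch below shows one common approach (per-subject z-score normalization followed by averaging across subjects); the study's actual screening and normalization procedure is an assumption here, and the data are synthetic.

```python
# Illustrative sketch: raw ratings -> per-video mean opinion scores (MOS).
# Assumes a (num_subjects, num_videos) matrix with NaN for unrated videos.
import numpy as np

def ratings_to_mos(ratings: np.ndarray) -> np.ndarray:
    """Z-score each subject's ratings to remove individual scale bias,
    then average across subjects to get one score per video."""
    mean = np.nanmean(ratings, axis=1, keepdims=True)   # per-subject mean
    std = np.nanstd(ratings, axis=1, keepdims=True)     # per-subject spread
    z = (ratings - mean) / std
    return np.nanmean(z, axis=0)                        # per-video MOS (z units)

# Hypothetical usage: 36 subjects rating 208 videos, with a few gaps.
rng = np.random.default_rng(1)
ratings = rng.uniform(0.0, 100.0, size=(36, 208))
ratings[rng.random(ratings.shape) < 0.05] = np.nan      # ~5% unrated
print(ratings_to_mos(ratings)[:5])
```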


Archive | 2005

Rate control techniques for video encoding using parametric equations

Prasanjit Panda


Archive | 2007

Adaptive filtering to enhance video encoder performance

Prasanjit Panda; Khaled Helmi El-Maleh; Hsiang-Tsun Li


Archive | 2006

Adaptive filtering to enhance video bit-rate control performance

Prasanjit Panda; Khaled Helmi El-Maleh; Hsiang-Tsun Li


Archive | 2013

Device and method for adaptive rate multimedia communications on a wireless network

Rahul Gopalan; Hyukjune Chung; Prasanjit Panda


Archive | 2013

System and Method for Efficient Post-Processing Video Stabilization with Camera Path Linearization

Kai Guo; Shu Xiao; Prasanjit Panda


Archive | 2017

Quality Assessment of Mobile Videos with In-Capture Distortions

Deepti Ghadiyaram; Janice Pan; Alan C. Bovik; Anush K. Moorthy; Prasanjit Panda; Kai-Chieh Yang


Archive | 2017

Methods and Systems of Performing Predictive Random Access Using a Background Picture

Ying Chen; Xuerui Zhang; Mayank Tiwari; Ning Bi; Prasanjit Panda

Collaboration


Dive into Prasanjit Panda's collaborations.

Top Co-Authors

Anush K. Moorthy
University of Texas at Austin

Alan C. Bovik
University of Texas System

Deepti Ghadiyaram
University of Texas at Austin

Janice Pan
University of Texas at Austin