
Publications

Featured research published by Kalpana Seshadrinathan.


IEEE Transactions on Image Processing | 2010

Study of Subjective and Objective Quality Assessment of Video

Kalpana Seshadrinathan; Rajiv Soundararajan; Alan C. Bovik; Lawrence K. Cormack

We present the results of a recent large-scale subjective study of video quality on a collection of videos distorted by a variety of application-relevant processes. Methods to assess the visual quality of digital videos as perceived by human observers are becoming increasingly important, due to the large number of applications that target humans as the end users of video. Owing to the many approaches to video quality assessment (VQA) that are being developed, there is a need for a diverse independent public database of distorted videos and subjective scores that is freely available. The resulting Laboratory for Image and Video Engineering (LIVE) Video Quality Database contains 150 distorted videos (obtained from ten uncompressed reference videos of natural scenes) that were created using four different commonly encountered distortion types. Each video was assessed by 38 human subjects, and the difference mean opinion scores (DMOS) were recorded. We also evaluated the performance of several state-of-the-art, publicly available full-reference VQA algorithms on the new database. A statistical evaluation of the relative performance of these algorithms is also presented. The database has a dedicated web presence that will be maintained as long as it remains relevant and the data is available online.
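The difference mean opinion scores (DMOS) mentioned above are typically computed by subtracting each subject's raw score for a distorted video from the same subject's score for its hidden reference, z-scoring the differences per subject to remove individual rating biases, and averaging across subjects. A minimal sketch of that recipe (the function name and the exact normalization are illustrative assumptions, not the paper's processing pipeline):

```python
from statistics import mean, stdev

def dmos(diff_scores):
    """diff_scores[s][v]: subject s's raw score for the hidden reference
    of video v minus the score for the distorted version (higher = worse).
    Returns one DMOS value per video: per-subject z-scored differences,
    averaged across subjects."""
    z = []
    for subj in diff_scores:
        mu, sd = mean(subj), stdev(subj)
        z.append([(x - mu) / sd for x in subj])
    n_videos = len(diff_scores[0])
    return [mean(z[s][v] for s in range(len(z))) for v in range(n_videos)]
```

In practice the z-scores are usually rescaled to a 0-100 range and outlier subjects are rejected first; both steps are omitted here.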


IEEE Transactions on Image Processing | 2010

Motion Tuned Spatio-Temporal Quality Assessment of Natural Videos

Kalpana Seshadrinathan; Alan C. Bovik

There has recently been a great deal of interest in the development of algorithms that objectively measure the integrity of video signals. Since video signals are being delivered to human end users in an increasingly wide array of applications and products, it is important that automatic methods of video quality assessment (VQA) be available that can assist in controlling the quality of video being delivered to this critical audience. Naturally, the quality of motion representation in videos plays an important role in the perception of video quality, yet existing VQA algorithms make little direct use of motion information, thus limiting their effectiveness. We seek to ameliorate this by developing a general, spatio-spectrally localized multiscale framework for evaluating dynamic video fidelity that integrates both spatial and temporal (and spatio-temporal) aspects of distortion assessment. Video quality is evaluated not only in space and time, but also in space-time, by evaluating motion quality along computed motion trajectories. Using this framework, we develop a full reference VQA algorithm for which we coin the term the MOtion-based Video Integrity Evaluation index, or MOVIE index. It is found that the MOVIE index delivers VQA scores that correlate quite closely with human subjective judgment, using the Video Quality Expert Group (VQEG) FRTV Phase 1 database as a test bed. Indeed, the MOVIE index is found to be quite competitive with, and even outperform, algorithms developed and submitted to the VQEG FRTV Phase 1 study, as well as more recent VQA algorithms tested on this database.


Proceedings of SPIE | 2010

A subjective study to evaluate video quality assessment algorithms

Kalpana Seshadrinathan; Rajiv Soundararajan; Alan C. Bovik; Lawrence K. Cormack

Automatic methods to evaluate the perceptual quality of a digital video sequence have widespread applications wherever the end-user is a human. Several objective video quality assessment (VQA) algorithms exist, whose performance is typically evaluated using the results of a subjective study performed by the video quality experts group (VQEG) in 2000. There is a great need for a free, publicly available subjective study of video quality that embodies state-of-the-art in video processing technology and that is effective in challenging and benchmarking objective VQA algorithms. In this paper, we present a study and a resulting database, known as the LIVE Video Quality Database, where 150 distorted video sequences obtained from 10 different source videos were subjectively evaluated by 38 human observers. Our study includes videos that have been compressed by MPEG-2 and H.264, as well as videos obtained by simulated transmission of H.264 compressed streams through error prone IP and wireless networks. The subjective evaluation was performed using a single stimulus paradigm with hidden reference removal, where the observers were asked to provide their opinion of video quality on a continuous scale. We also present the performance of several freely available objective, full reference (FR) VQA algorithms on the LIVE Video Quality Database. The recent MOtion-based Video Integrity Evaluation (MOVIE) index emerges as the leading objective VQA algorithm in our study, while the performance of the Video Quality Metric (VQM) and the Multi-Scale Structural SIMilarity (MS-SSIM) index is noteworthy. The LIVE Video Quality Database is freely available for download and we hope that our study provides researchers with a valuable tool to benchmark and improve the performance of objective VQA algorithms.
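The performance comparisons reported here rest on rank correlation between each algorithm's quality predictions and the subjective DMOS values. As a hedged sketch (ties are ignored for brevity; this is not code from the paper), the Spearman rank-order correlation coefficient can be computed as:

```python
def srocc(x, y):
    """Spearman rank-order correlation between objective scores x and
    subjective scores y, via the rank-difference formula (no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A value near 1 means the algorithm orders the videos the same way the human subjects did; evaluations of this kind usually also report a linear correlation after a nonlinear fit, which is omitted here.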


IEEE Transactions on Circuits and Systems for Video Technology | 2010

Wireless Video Quality Assessment: A Study of Subjective Scores and Objective Algorithms

Anush K. Moorthy; Kalpana Seshadrinathan; Rajiv Soundararajan; Alan C. Bovik

Evaluating the perceptual quality of video is of tremendous importance in the design and optimization of wireless video processing and transmission systems. In an endeavor to emulate human perception of quality, various objective video quality assessment (VQA) algorithms have been developed. However, the only subjective video quality database that exists on which these algorithms can be tested is dated and does not accurately reflect distortions introduced by present generation encoders and/or wireless channels. In order to evaluate the performance of VQA algorithms for the specific task of H.264 advanced video coding compressed video transmission over wireless networks, we conducted a subjective study involving 160 distorted videos. Various leading full reference VQA algorithms were tested for their correlation with human perception. The data from the paper has been made available to the research community, so that further research on new VQA algorithms and on the general area of VQA may be carried out.


International Conference on Acoustics, Speech, and Signal Processing | 2007

A Structural Similarity Metric for Video Based on Motion Models

Kalpana Seshadrinathan; Alan C. Bovik

Quality assessment plays a very important role in almost all aspects of multimedia signal processing such as acquisition, coding, display, and processing. Several objective quality metrics have been proposed for images, but video quality assessment has received relatively little attention and most video quality metrics have been simple extensions of metrics for images. In this paper, we propose a novel quality metric for video sequences that utilizes motion information, the principal new element in moving from images to video. This metric is capable of capturing temporal artifacts in video sequences in addition to spatial distortions. Results are presented that demonstrate the efficacy of our quality metric by comparing model performance against subjective scores on the database developed by the video quality experts group.


IEEE Transactions on Image Processing | 2013

Video Quality Pooling Adaptive to Perceptual Distortion Severity

Jincheol Park; Kalpana Seshadrinathan; Sanghoon Lee; Alan C. Bovik

It is generally recognized that severe video distortions that are transient in space and/or time have a large effect on overall perceived video quality. In order to understand this phenomenon, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and lossy transmission over communication channels. We propose a content adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes “worst” scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it using three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database.
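The pooling idea described above, emphasizing the "worst" local scores, can be illustrated with a much simpler percentile pooling rule; the paper's actual strategy is content adaptive and also accounts for cohesive motion, which this sketch does not attempt:

```python
def worst_percentile_pool(local_scores, p=0.05):
    """Pool spatio-temporally local quality scores by averaging only the
    worst fraction p of them (lower score = worse quality), reflecting
    the observation that severe transient distortions dominate overall
    perceived quality. p is an illustrative parameter."""
    s = sorted(local_scores)
    k = max(1, int(p * len(s)))
    return sum(s[:k]) / k
```

Compared with a plain mean, this rule lets a small number of badly distorted regions or frames pull the pooled score down sharply.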


International Conference on Image Processing | 2008

Unifying analysis of full reference image quality assessment

Kalpana Seshadrinathan; Alan C. Bovik

This paper studies two increasingly popular paradigms for image quality assessment - Structural SIMilarity (SSIM) metrics and Information Fidelity metrics. The relation of the SSIM metric to Mean Squared Error and Human Visual System (HVS) based models of quality assessment is studied. The SSIM model is shown to be equivalent to models of contrast gain control of the HVS. We study the information theoretic metrics and show that the Information Fidelity Criterion (IFC) is a monotonic function of the structure term of the SSIM index applied in the sub-band filtered domain. Our analysis of the Visual Information Fidelity (VIF) criterion shows that improvements in VIF include incorporation of a contrast comparison, in addition to the structure comparison in IFC. Our analysis attempts to unify quality metrics derived from different first principles and characterize the relative performance of different QA systems.
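For concreteness, the SSIM index analyzed here multiplies three comparison terms, luminance, contrast, and structure, computed from local patch statistics; the structure term is the one related to IFC in the analysis above. A minimal sketch (the constants c1, c2, c3 are illustrative stabilizers, not the published values):

```python
from math import sqrt

def ssim_terms(x, y, c1=0.01, c2=0.03, c3=0.015):
    """Luminance, contrast, and structure comparison terms of the SSIM
    index for two image patches x, y given as flat lists of pixel
    values. The full SSIM index is the product l * c * s."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    l = (2 * mx * my + c1) / (mx * mx + my * my + c1)
    c = (2 * sqrt(vx) * sqrt(vy) + c2) / (vx + vy + c2)
    s = (cov + c3) / (sqrt(vx) * sqrt(vy) + c3)
    return l, c, s
```

Identical patches yield all three terms equal to 1; the structure term s is a stabilized correlation coefficient, which is what the IFC analysis in the paper connects to.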


International Conference on Acoustics, Speech, and Signal Processing | 2011

Temporal hysteresis model of time varying subjective video quality

Kalpana Seshadrinathan; Alan C. Bovik

Video quality assessment (QA) continues to be an important area of research due to the overwhelming number of applications where videos are delivered to humans. In particular, the problem of temporal pooling of quality scores has received relatively little attention. We observe a hysteresis effect in the subjective judgment of time-varying video quality based on measured behavior in a subjective study. Based on our analysis of the subjective data, we propose a hysteresis temporal pooling strategy for QA algorithms. Applying this temporal strategy to pool scores from PSNR, SSIM [1] and MOVIE [2] produces markedly improved subjective quality prediction.
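The hysteresis effect can be illustrated with a deliberately simplified pooling rule: at each instant, blend the worst quality seen in the recent past (viewers remember quality drops) with the current frame score, then average over time. The window length and blending weight below are illustrative assumptions, not the parameters proposed in the paper:

```python
def hysteresis_pool(scores, tau=8, alpha=0.8):
    """Simplified temporal hysteresis pooling of per-frame quality
    scores. tau frames of memory; alpha weights the 'memory' term
    (worst recent score) against the current score."""
    pooled = []
    for t, s in enumerate(scores):
        memory = min(scores[max(0, t - tau):t + 1])
        pooled.append(alpha * memory + (1 - alpha) * s)
    return sum(pooled) / len(pooled)
```

A brief quality drop therefore depresses the pooled score for several subsequent frames, which a plain temporal average would not capture.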


Electronic Imaging | 2009

Motion-based perceptual quality assessment of video

Kalpana Seshadrinathan; Alan C. Bovik

There is a great deal of interest in methods to assess the perceptual quality of a video sequence in a full reference framework. Motion plays an important role in human perception of video, and videos suffer from several artifacts that arise from inaccuracies in the representation of motion in the test video relative to the reference. However, existing algorithms to measure video quality focus primarily on capturing spatial artifacts in the video signal, and are inadequate at modeling motion perception and capturing temporal artifacts in videos. We present an objective, full reference video quality index known as the MOtion-based Video Integrity Evaluation (MOVIE) index that integrates both spatial and temporal aspects of distortion assessment. MOVIE explicitly uses motion information from the reference video and evaluates the quality of the test video along the motion trajectories of the reference video. The performance of MOVIE is evaluated using the VQEG FR-TV Phase I dataset and MOVIE is shown to be competitive with, and even outperform, existing video quality assessment systems.


Multimedia Tools and Applications | 2011

Automatic prediction of perceptual quality of multimedia signals--a survey

Kalpana Seshadrinathan; Alan C. Bovik

We survey recent developments in multimedia signal quality assessment, including image, audio, video, and combined signals. Such an overview is timely given the recent explosion in all-digital sensory entertainment and communication devices pervading the consumer space. Owing to the sensory nature of these signals, perceptual models lie at the heart of multimedia signal quality assessment algorithms. We survey these models and recent competitive algorithms and discuss comparison studies that others have conducted. In this context we also describe existing signal quality assessment databases. We envision that the reader will gain a firmer understanding of the broad topic of multimedia quality assessment, of the various sub-disciplines corresponding to different signal types, how these signal types interact in producing an overall user experience, and what directions of research remain to be pursued.

Collaboration

Top co-authors of Kalpana Seshadrinathan:

Alan C. Bovik, University of Texas at Austin
Rajiv Soundararajan, University of Texas at Austin