
Publications


Featured research published by Deepti Ghadiyaram.


IEEE Global Conference on Signal and Information Processing | 2014

Blind image quality assessment on real distorted images using deep belief nets

Deepti Ghadiyaram; Alan C. Bovik

We present a novel natural-scene-statistics-based blind image quality assessment model that is created by training a deep belief net to discover good feature representations, which are then used to learn a regressor for quality prediction. The proposed deep model has an unsupervised pre-training stage followed by a supervised fine-tuning stage, enabling it to generalize over different distortion types, mixtures, and severities. We evaluate our new model on a recently created database of images afflicted by real distortions and show that it outperforms current state-of-the-art blind image quality prediction models.
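
The two-stage recipe in this abstract (unsupervised pre-training of a deep belief net to discover feature representations, then supervised learning of a quality regressor on top of them) can be illustrated with a minimal sketch. The stacked-RBM depth, layer sizes, feature dimensionality, and the choice of scikit-learn's BernoulliRBM and SVR below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: unsupervised pre-training of stacked RBMs (a deep belief
# net) on NSS-style features, then a supervised regressor for quality
# prediction. All dimensions and data are placeholders.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((500, 36))    # placeholder NSS feature vectors scaled to [0, 1]
y = rng.random(500) * 100.0  # placeholder mean opinion scores

# Unsupervised pre-training: each RBM learns a representation of the
# layer below it.
rbm1 = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
rbm2 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
h1 = rbm1.fit_transform(X)
h2 = rbm2.fit_transform(h1)

# Supervised stage: map the discovered representations to quality scores.
regressor = SVR(kernel="rbf", C=10.0)
regressor.fit(h2, y)
print(regressor.predict(h2[:5]))
```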


IEEE Global Conference on Signal and Information Processing | 2014

Study of the effects of stalling events on the quality of experience of mobile streaming videos

Deepti Ghadiyaram; Alan C. Bovik; Hojatollah Yeganeh; Roman C. Kordasiewicz; Michael Gallant

We have created a new mobile video database that models distortions caused by network impairments. In particular, we simulate stalling events and startup delays in over-the-top (OTT) mobile streaming videos. We describe the way we simulated diverse stalling events to create a corpus of distorted videos and the human study we conducted to obtain subjective scores. We also analyzed the ratings to understand the impact of several factors that influence the quality of experience (QoE). To the best of our knowledge, ours is the most comprehensive and diverse study on the effects of stalling events on QoE. We are making the database publicly available [1] in order to help advance state-of-the-art research on user-centric mobile network planning and management.
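
As a rough illustration of how diverse stalling events might be simulated, the sketch below generates a startup delay and a randomized set of stall positions and durations for a video of a given length. The function name, parameter ranges, and pattern structure are hypothetical; the paper's actual simulation procedure is more elaborate.

```python
# Hedged sketch: generate a randomized stalling pattern (startup delay plus
# stall positions/durations) for one streaming session. Parameter ranges
# are illustrative, not the study's actual design.
import random

def make_stall_pattern(video_len_s, n_stalls, max_stall_s, startup_delay_s, seed=0):
    """Return (startup_delay_s, [(position_s, duration_s), ...]) sorted by position."""
    rng = random.Random(seed)
    positions = sorted(rng.uniform(0.0, video_len_s) for _ in range(n_stalls))
    return startup_delay_s, [(p, rng.uniform(0.5, max_stall_s)) for p in positions]

delay, stalls = make_stall_pattern(video_len_s=60.0, n_stalls=3,
                                   max_stall_s=4.0, startup_delay_s=2.0)
print(f"startup delay: {delay}s, stalls: {stalls}")
```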


Journal of Vision | 2017

Perceptual quality prediction on authentically distorted images using a bag of features approach

Deepti Ghadiyaram; Alan C. Bovik

Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it achieves quality prediction power superior to that of other leading models.
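
One concrete example of a perceptually relevant natural scene statistic of the kind discussed here is the distribution of mean-subtracted, contrast-normalized (MSCN) coefficients of an image channel, which is highly regular for pristine images and disturbed by distortion. The sketch below computes simple summary statistics of an MSCN map; the window size, constants, and choice of statistics are illustrative assumptions, not the paper's feature set.

```python
# Hedged sketch of one NSS-style "feature map": MSCN coefficients of a
# channel, summarized by sample statistics. Constants are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_stats(channel, sigma=7.0 / 6.0, eps=1e-3):
    """Return (mean, variance, kurtosis) of the MSCN coefficient map."""
    mu = gaussian_filter(channel, sigma)                       # local mean
    var = gaussian_filter(channel * channel, sigma) - mu * mu  # local variance
    mscn = (channel - mu) / (np.sqrt(np.clip(var, 0.0, None)) + eps)
    m, v = mscn.mean(), mscn.var()
    kurt = ((mscn - m) ** 4).mean() / (v * v + eps)
    return m, v, kurt

luminance = np.random.rand(128, 128)  # placeholder channel in [0, 1]
print(mscn_stats(luminance))
```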


Asilomar Conference on Signals, Systems and Computers | 2014

Crowdsourced study of subjective image quality

Deepti Ghadiyaram; Alan C. Bovik

We designed and created a new image quality database that models diverse authentic image distortions and artifacts that affect images captured using modern mobile devices. We also designed and implemented a new online crowdsourcing system, which we are using to conduct a very large-scale, ongoing, multi-month image quality assessment (IQA) subjective study, wherein a wide range of diverse observers record their judgments of image quality. Our database currently consists of over 320,000 opinion scores on 1,163 authentically distorted images evaluated by over 7,000 human observers. The new database will soon be made freely available for download, and we envision that the fruits of our efforts will provide researchers with a valuable tool to benchmark and improve the performance of objective IQA algorithms.
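
A core step in any such crowdsourced study is aggregating raw ratings into per-image mean opinion scores while compensating for each worker's personal use of the rating scale. The sketch below does this with simple per-subject z-scoring; the data layout and normalization choice are illustrative, not the study's actual processing pipeline.

```python
# Hedged sketch: aggregate raw crowdsourced ratings into per-image MOS,
# with per-subject z-scoring to reduce rating-scale bias. Data layout is
# illustrative.
from collections import defaultdict
import statistics

ratings = [  # (subject_id, image_id, score on a 0-100 slider)
    ("s1", "img_a", 72), ("s1", "img_b", 35),
    ("s2", "img_a", 80), ("s2", "img_b", 41),
]

by_subject = defaultdict(list)
for subj, _, score in ratings:
    by_subject[subj].append(score)

def zscore(subj, score):
    scores = by_subject[subj]
    mu = statistics.mean(scores)
    sd = statistics.pstdev(scores) or 1.0  # guard against zero spread
    return (score - mu) / sd

by_image = defaultdict(list)
for subj, img, score in ratings:
    by_image[img].append(zscore(subj, score))

mos = {img: statistics.mean(z) for img, z in by_image.items()}
print(mos)
```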


Electronic Imaging | 2015

Feature maps-driven no-reference image quality prediction of authentically distorted images

Deepti Ghadiyaram; Alan C. Bovik

Current blind image quality prediction models rely on benchmark databases composed of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human-perceived visual quality on such inauthentic distortions. However, real-world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion, or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image and focuses instead on modeling the remarkable consistencies in the scene statistics of real-world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database, and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.


International Conference on Image Processing | 2014

Delivery quality score model for Internet video

Hojatollah Yeganeh; Roman C. Kordasiewicz; Michael Gallant; Deepti Ghadiyaram; Alan C. Bovik

The vast majority of today's Internet video services are consumed over-the-top (OTT) via reliable streaming (HTTP over TCP), where the primary noticeable delivery-related impairments are startup delay and stalling. In this paper we introduce an objective model, called the delivery quality score (DQS) model, to predict users' QoE in the presence of such impairments. We describe a large subjective study that we carried out to tune and validate this model. Our experiments demonstrate that the DQS model correlates highly with the subjective data and that it outperforms other emerging models.
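
The paper's tuned DQS formula is not reproduced here, but the general shape of such a model (a quality score that falls with startup delay and more steeply with stalling) can be sketched as follows. The functional form and every constant below are assumptions for illustration only.

```python
# Toy sketch of a delivery-quality-style score that penalizes startup delay
# and stalling. This is NOT the paper's DQS model; the functional form and
# all constants are invented for illustration.
import math

def toy_delivery_score(startup_delay_s, stall_durations_s, video_len_s):
    """Map delivery impairments to a 0-100 quality proxy (higher is better)."""
    total_stall = sum(stall_durations_s)
    n_stalls = len(stall_durations_s)
    # Saturating penalties: stalling is weighted more heavily than startup delay.
    delay_pen = 1.0 - math.exp(-startup_delay_s / 10.0)
    stall_pen = 1.0 - math.exp(-10.0 * (total_stall + 2.0 * n_stalls) / max(video_len_s, 1.0))
    penalty = 0.4 * delay_pen + 0.6 * stall_pen
    return 100.0 * (1.0 - min(penalty, 1.0))

print(toy_delivery_score(startup_delay_s=2.0, stall_durations_s=[1.5, 3.0], video_len_s=60.0))
```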


IEEE Transactions on Image Processing | 2017

No-Reference Quality Assessment of Tone-Mapped HDR Pictures

Debarati Kundu; Deepti Ghadiyaram; Alan C. Bovik; Brian L. Evans

Being able to automatically predict digital picture quality, as perceived by human observers, has become important in many applications where humans are the ultimate consumers of displayed visual information. Standard dynamic range (SDR) images provide 8 b/color/pixel. High dynamic range (HDR) images, which are usually created from multiple exposures of the same scene, can provide 16 or 32 b/color/pixel, but must be tonemapped to SDR for display on standard monitors. Multi-exposure fusion techniques bypass HDR creation by fusing the exposure stack directly to SDR format while aiming for aesthetically pleasing luminance and color distributions. Here, we describe a new no-reference image quality assessment (NR IQA) model for HDR pictures that is based on standard bandpass natural scene statistics (NSS) measurements and on newly conceived differential NSS of HDR pictures. We derive an algorithm from the model, which we call the HDR IMAGE GRADient-based Evaluator (HIGRADE). NSS models have previously been used to devise NR IQA models that effectively predict the subjective quality of SDR images, but they perform significantly worse on tonemapped HDR content. Toward ameliorating this, we make the following contributions: 1) we design HDR picture NR IQA models and algorithms using both standard space-domain NSS features as well as novel HDR-specific gradient-based features that significantly elevate prediction performance; 2) we validate the proposed models on a large-scale crowdsourced HDR image database; and 3) we demonstrate that the proposed models also perform well on legacy natural SDR images. The software is available at: http://live.ece.utexas.edu/research/Quality/higradeRelease.zip.
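
To give a sense of the gradient-based ingredient, the sketch below computes a gradient magnitude map of a tone-mapped image and summarizes it with simple statistics. This is a toy illustration of the idea, not the released HIGRADE code; the statistics chosen are assumptions.

```python
# Hedged sketch: gradient magnitude map of a tone-mapped channel, summarized
# by simple statistics, as one ingredient of a gradient-based IQA feature.
import numpy as np

def gradient_feature_stats(img):
    """Return (mean, variance) of the gradient magnitude map."""
    gy, gx = np.gradient(img.astype(float))  # gradients along rows and columns
    gmag = np.hypot(gx, gy)
    return gmag.mean(), gmag.var()

tonemapped = np.random.rand(64, 64)  # placeholder tone-mapped SDR image
print(gradient_feature_stats(tonemapped))
```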


IEEE Transactions on Image Processing | 2017

Large-Scale Crowdsourced Study for Tone-Mapped HDR Pictures

Debarati Kundu; Deepti Ghadiyaram; Alan C. Bovik; Brian L. Evans

Measuring digital picture quality, as perceived by human observers, is increasingly important in many applications in which humans are the ultimate consumers of visual information. Standard dynamic range (SDR) images provide 8 b/color/pixel. High dynamic range (HDR) images, usually created from multiple exposures of the same scene, can provide 16 or 32 b/color/pixel, but need to be tonemapped to SDR for display on standard monitors. Multi-exposure fusion (MEF) techniques bypass HDR creation by fusing an exposure stack directly to SDR images to achieve aesthetically pleasing luminance and color distributions. Many HDR and MEF databases have a relatively small number of images and human opinion scores, obtained under stringently controlled conditions, which limits the realism of the viewing conditions represented. Moreover, many of these databases are intended to compare tone-mapping algorithms rather than being specialized for developing and comparing image quality assessment models. To overcome these challenges, we conducted a massively crowdsourced online subjective study. The primary contributions described in this paper are: 1) the new ESPL-LIVE HDR Image Database that we created, containing diverse images obtained by tone-mapping operators and MEF algorithms, with and without post-processing; 2) a large-scale subjective study that we conducted using a crowdsourced platform to gather more than 300,000 opinion scores on 1,811 images from over 5,000 unique observers; and 3) a detailed study of the correlation performance of state-of-the-art no-reference image quality assessment algorithms against human opinion scores of these images. The database is available at http://signal.ece.utexas.edu/%7Edebarati/HDRDatabase.zip.
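
Contribution 3, the correlation study, follows the standard IQA evaluation protocol: compare algorithm predictions to human opinion scores using rank and linear correlation. A minimal sketch with placeholder numbers:

```python
# Hedged sketch of the standard correlation evaluation: Spearman (SROCC) and
# Pearson (PLCC) correlations between model predictions and human scores.
# All numbers below are placeholders, not results from the paper.
from scipy.stats import pearsonr, spearmanr

mos       = [62.1, 45.3, 78.9, 30.2, 55.0]  # placeholder human opinion scores
predicted = [60.0, 50.1, 75.2, 35.8, 52.3]  # placeholder model outputs

srocc, _ = spearmanr(mos, predicted)
plcc, _ = pearsonr(mos, predicted)
print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}")
```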


Asilomar Conference on Signals, Systems and Computers | 2016

No-reference image quality assessment for high dynamic range images

Debarati Kundu; Deepti Ghadiyaram; Alan C. Bovik; Brian L. Evans

Being able to automatically predict digital picture quality, as perceived by human observers, has become important in many applications where humans are the ultimate consumers of displayed visual information. Standard dynamic range (SDR) images provide 8 bits/color/pixel. High dynamic range (HDR) images, which are usually created from multiple exposures of the same scene, can provide 16 or 32 bits/color/pixel, but must be tonemapped to SDR for display on standard monitors. Multi-exposure fusion (MEF) techniques bypass HDR creation by fusing the exposure stack directly to SDR format while aiming for aesthetically pleasing luminance and color distributions. Here, we describe a new no-reference image quality assessment (NR IQA) model for HDR pictures that is based on standard bandpass measurements and on newly conceived differential natural scene statistics (NSS) of HDR pictures. We derive an algorithm from the model, which we call the Gradient Image Quality Assessment algorithm (G-IQA). NSS models have previously been used to devise NR IQA models that effectively predict the subjective quality of SDR images, but they perform significantly worse on tonemapped HDR content. Toward ameliorating this, we make the following contributions: (1) we design an HDR picture NR IQA model and algorithm using both standard space-domain NSS features as well as novel HDR-specific gradient-based features that significantly elevate prediction performance; (2) we validate the proposed model on a large-scale crowdsourced HDR image database; and (3) we demonstrate that the proposed model also performs well on legacy natural SDR images. The software is available at: http://signal.ece.utexas.edu/%7Edebarati/higradeRelease.zip.


IEEE Transactions on Circuits and Systems for Video Technology | 2018

In-Capture Mobile Video Distortions: A Study of Subjective Behavior and Objective Algorithms

Deepti Ghadiyaram; Janice Pan; Alan C. Bovik; Anush K. Moorthy; Prasanjit Panda; Kai-Chieh Yang

Digital videos often contain visual distortions that are introduced by the camera's hardware or processing software during the capture process. These distortions often detract from a viewer's quality of experience. Understanding how human observers perceive the visual quality of digital videos is of great importance to camera designers. Thus, the development of automatic objective methods that accurately quantify the impact of visual distortions on perception has greatly accelerated. Video quality algorithm design and verification require realistic databases of distorted videos and human judgments of them. However, most current publicly available video quality databases have been created under highly controlled conditions using graded, simulated, post-capture distortions (such as jitter and compression artifacts) applied to high-quality videos. The plethora of commercial hand-held mobile video capture devices produces videos that are often afflicted by a variety of complex distortions generated during the capture process. These in-capture distortions are not well modeled by the synthetic, post-capture distortions found in existing video quality assessment (VQA) databases. Toward overcoming this limitation, we designed and created a new database that we call the LIVE-Qualcomm mobile in-capture video quality database, comprising a total of 208 videos that model six common in-capture distortions. We also conducted a subjective quality assessment study using this database, in which each video was assessed by 39 unique subjects. Furthermore, we evaluated several top-performing no-reference image quality assessment (IQA) and VQA algorithms on the new database and studied how real-world in-capture distortions challenge both human viewers and automatic perceptual quality prediction models. The new database is freely available at: http://live.ece.utexas.edu/research/incaptureDatabase/index.html.

Collaboration


Dive into Deepti Ghadiyaram's collaborations.

Top Co-Authors

Alan C. Bovik
University of Texas at Austin

Janice Pan
University of Texas at Austin

Anush K. Moorthy
University of Texas at Austin

Brian L. Evans
University of Texas at Austin