
Publication


Featured research published by Michele A. Saad.


IEEE Transactions on Image Processing | 2012

Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain

Michele A. Saad; Alan C. Bovik; Christophe Charrier

We develop an efficient general-purpose blind/no-reference image quality assessment (IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. The approach extracts features from an NSS model of the image DCT coefficients: the estimated model parameters are formed into features indicative of perceptual quality, which feed a simple Bayesian inference model that predicts image quality scores. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. Given the features extracted from a test image, the quality score that maximizes the probability of the empirically determined inference model is chosen as the predicted quality score of that image. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human judgments of quality, at a level competitive with the popular SSIM index.
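A minimal sketch of the DCT-domain NSS feature idea described above, assuming a 5x5 block DCT and the standard moment-matching estimator for the generalized Gaussian shape parameter; the block size, estimator, and function names are illustrative choices, not the authors' implementation:

```python
import numpy as np
from math import gamma

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (satisfies D @ D.T == I).
    M = np.array([[np.cos(np.pi * (2*j + 1) * k / (2*n)) for j in range(n)]
                  for k in range(n)])
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def block_dct_coeffs(img, b=5):
    # Collect non-DC DCT coefficients from non-overlapping b x b blocks.
    D = dct_matrix(b)
    coeffs = []
    for i in range(0, img.shape[0] - b + 1, b):
        for j in range(0, img.shape[1] - b + 1, b):
            c = D @ img[i:i+b, j:j+b] @ D.T
            coeffs.append(c.ravel()[1:])  # drop the DC term
    return np.concatenate(coeffs)

def ggd_shape(x):
    # Moment-matching estimate of the generalized Gaussian shape parameter:
    # rho = (E|x|)^2 / E[x^2] maps monotonically to the shape, so we invert
    # the mapping by a dense grid lookup.
    rho = np.mean(np.abs(x))**2 / np.mean(x**2)
    shapes = np.arange(0.2, 10.0, 0.001)
    r = np.array([gamma(2/s)**2 / (gamma(1/s) * gamma(3/s)) for s in shapes])
    return shapes[np.argmin(np.abs(r - rho))]
```

A feature such as the estimated shape parameter (smaller values indicate heavier-tailed coefficient distributions, which shift under distortion) would then feed the probabilistic score-prediction stage.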


IEEE Signal Processing Letters | 2010

A DCT Statistics-Based Blind Image Quality Index

Michele A. Saad; Alan C. Bovik; Christophe Charrier

The development of general-purpose no-reference approaches to image quality assessment still lags recent advances in full-reference methods. Additionally, most no-reference or blind approaches are distortion-specific: they assess only a specific type of distortion assumed present in the test image (such as blockiness, blur, or ringing), which limits their application domain. Other approaches rely on training a machine learning algorithm; these methods, however, are only as effective as the features used to train them. Towards ameliorating this, we introduce the BLIINDS index (BLind Image Integrity Notator using DCT Statistics), a no-reference approach to image quality assessment that does not assume a specific type of distortion in the image. It predicts image quality by observing the statistics of local discrete cosine transform coefficients, and it requires only minimal training. The method is shown to correlate highly with human perception of quality.


IEEE Transactions on Image Processing | 2014

Blind Prediction of Natural Video Quality

Michele A. Saad; Alan C. Bovik; Christophe Charrier

We propose a blind (no-reference or NR) video quality evaluation model that is not distortion-specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called Video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top-performing reduced- and full-reference VQA algorithms.
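The motion-coherency notion above can be illustrated with a toy statistic; this is a deliberate simplification and a stand-in, not the paper's measure: it scores coherency as the mean resultant length of motion-vector directions (1.0 for perfectly coherent global motion, near 0.0 for incoherent local motion):

```python
import numpy as np

def motion_coherency(flow_u, flow_v, eps=1e-8):
    # Normalize each motion vector to a unit direction, then measure how
    # strongly the directions agree: the length of the average unit vector.
    mag = np.sqrt(flow_u**2 + flow_v**2) + eps
    cos_t, sin_t = flow_u / mag, flow_v / mag
    return float(np.sqrt(np.mean(cos_t)**2 + np.mean(sin_t)**2))
```

A distortion that disrupts smooth object motion (e.g. packet loss or heavy compression) would tend to lower such a coherency score relative to the pristine scene.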


International Conference on Image Processing | 2011

DCT statistics model-based blind image quality assessment

Michele A. Saad; Alan C. Bovik; Christophe Charrier

We propose an efficient, general-purpose, distortion-agnostic, blind/no-reference image quality assessment (NR-IQA) algorithm based on a natural scene statistics model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. We propose a generalized parametric model of the extracted DCT coefficients. The parameters of the model are utilized to predict image quality scores. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human visual perception of quality, at a level that is even competitive with the powerful full-reference SSIM index.


IEEE Transactions on Image Processing | 2016

A Completely Blind Video Integrity Oracle

Anish Mittal; Michele A. Saad; Alan C. Bovik

Considerable progress has been made toward developing still picture perceptual quality analyzers that do not require any reference picture and that are not trained on human opinion scores of distorted images. However, there do not yet exist any such completely blind video quality assessment (VQA) models. Here, we attempt to bridge this gap by developing a new VQA model called the video intrinsic integrity and distortion evaluation oracle (VIIDEO). The new model does not require any information beyond the video whose quality is being evaluated. VIIDEO embodies models of intrinsic statistical regularities that are observed in natural videos, which are used to quantify disturbances introduced by distortions. An algorithm derived from the VIIDEO model is thereby able to predict the quality of distorted videos without any external knowledge about the pristine source, anticipated distortions, or human judgments of video quality. Even with such a paucity of information, we are able to show that the VIIDEO algorithm performs much better than the legacy full-reference quality measure MSE on the LIVE VQA database and delivers performance comparable with a leading human-judgment-trained blind VQA model. We believe that the VIIDEO algorithm is a significant step toward making real-time monitoring of completely blind video quality possible.
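One intrinsic regularity commonly exploited by such "opinion-unaware" models is that locally mean-subtracted, contrast-normalized (MSCN) coefficients of natural frames and frame differences follow predictable statistics that distortions disturb. A rough sketch, using a simple box window rather than any particular weighting from the paper; the window size, epsilon, and function names are illustrative assumptions:

```python
import numpy as np

def mscn(frame_diff, k=3, eps=1.0):
    # Mean-subtracted, contrast-normalized coefficients of a frame
    # difference, using a naive k x k box window for the local statistics.
    def box(x):
        kern = np.ones((k, k)) / (k * k)
        p = k // 2
        xp = np.pad(x, p, mode='edge')      # 'same'-size convolution
        out = np.empty_like(x, dtype=float)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = (xp[i:i+k, j:j+k] * kern).sum()
        return out

    mu = box(frame_diff)
    sigma = np.sqrt(np.maximum(box(frame_diff**2) - mu**2, 0.0))
    return (frame_diff - mu) / (sigma + eps)
```

Departures of the empirical MSCN distribution from its expected natural-video shape can then be quantified without any reference video or trained opinion model.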


Asilomar Conference on Signals, Systems and Computers | 2012

Blind quality assessment of videos using a model of natural scene statistics and motion coherency

Michele A. Saad; Alan C. Bovik

We propose a no-reference algorithm for video quality evaluation. The algorithm relies on a natural scene statistics (NSS) model of video DCT coefficients as well as a temporal model of motion coherency. The proposed framework is tested on the LIVE VQA database, and shown to correlate well with human visual judgments of quality.


Conference on Multimedia Modeling | 2012

Towards category-based aesthetic models of photographs

Pere Obrador; Michele A. Saad; Poonam Suryanarayan; Nuria Oliver

We present a novel data-driven, category-based approach to automatically assess the aesthetic appeal of photographs. To tackle this problem, a novel set of image segmentation methods based on feature contrast is introduced: luminance, sharpness, saliency, color chroma, and a measure of region relevance are computed to generate different image partitions. Image aesthetic features are computed on these regions (e.g. sharpness, colorfulness, and a novel set of light exposure features). In addition, color harmony, image simplicity, and a novel set of image composition features are measured on the overall image. Support Vector Regression models are generated for each of 7 popular image categories: animals, architecture, cityscape, floral, landscape, portraiture, and seascapes. These models are analyzed to understand which features have greater influence in each of those categories, and how they perform with respect to a generic state-of-the-art model.
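The per-category modeling step can be sketched as follows, substituting closed-form ridge regression for the paper's Support Vector Regression so the example stays dependency-free; the function names, feature layout, and regularization weight are all assumptions:

```python
import numpy as np

def fit_category_models(features_by_cat, scores_by_cat, lam=1.0):
    # One regressor per image category (e.g. 'landscape', 'portraiture'),
    # fit by ridge regression as a simplified stand-in for SVR.
    models = {}
    for cat, X in features_by_cat.items():
        y = scores_by_cat[cat]
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
        w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
        models[cat] = w
    return models

def predict(models, cat, x):
    # Score a feature vector with the model of its (known) category.
    return float(np.append(x, 1.0) @ models[cat])
```

Inspecting each category's learned weights is one simple way to see which aesthetic features dominate in that category, mirroring the per-category analysis described above.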


IEEE Signal Processing Letters | 2015

Objective Consumer Device Photo Quality Evaluation

Michele A. Saad; Philip J. Corriveau; Ramesh Jaladi

We propose an approach to no-reference image quality evaluation that is consumer-centric and targets real consumer-type images with realistic distortions and realistic quality ranges. We show that state-of-the-art approaches to no-reference image quality assessment do not perform as well on real consumer-type images, and propose an approach that is simple, efficient, and easily interpretable, and that achieves high prediction performance on a dataset of images with real, non-simulated distortions.


Quality of Multimedia Experience | 2009

Natural motion statistics for no-reference video quality assessment

Michele A. Saad; Alan C. Bovik

We model the motion statistics of video sequences, towards the development of no-reference video quality indices that take into account spatial as well as temporal characteristics of video signals. Here we explore the temporal characteristics of undistorted as well as distorted IP video sequences (distorted by varying levels of packet loss), as extracted from optical flow vectors. We present an algorithm for extracting motion statistics by computing independent components (ICs) from the optical flow field. We then model the extracted ICs and show that they are more closely Laplacian distributed than the entire nondecomposed features. We also observe that the lower the video quality, the higher the root-mean-square (RMS) difference between the maximum-likelihood Laplacian fits of the two extracted ICs of the flow vectors.
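The Laplacian fitting step has a closed form that can be sketched directly; the grid size, helper names, and the exact fitting-difference definition below are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def laplace_mle(x):
    # Maximum-likelihood Laplacian fit: location = sample median,
    # scale = mean absolute deviation about the median.
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    return mu, b

def laplace_pdf(x, mu, b):
    return np.exp(-np.abs(x - mu) / b) / (2 * b)

def fit_rms_difference(ic1, ic2, n_grid=512):
    # RMS difference between the ML Laplacian fits of two independent
    # components of the optical-flow field, evaluated on a shared grid.
    lo = min(ic1.min(), ic2.min())
    hi = max(ic1.max(), ic2.max())
    grid = np.linspace(lo, hi, n_grid)
    p1 = laplace_pdf(grid, *laplace_mle(ic1))
    p2 = laplace_pdf(grid, *laplace_mle(ic2))
    return float(np.sqrt(np.mean((p1 - p2)**2)))
```

Per the observation above, a larger RMS difference between the two ICs' fits would indicate lower video quality.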


Colour and Visual Computing Symposium (CVCS) | 2015

Impact of camera pixel count and monitor resolution on perceptual image quality

Michele A. Saad; Margaret H. Pinson; David G. Nicholas; Niels Van Kets; Glenn Van Wallendael; Ralston Da Silva; Ramesh Jaladi; Philip J. Corriveau

Traditional 35mm film cameras are no longer the main devices today's consumers use to capture images. Though the dominant technology has shifted to digital cameras and displays that differ widely in pixel count and resolution, our understanding of the quality impact of these variables lags. This paper explores the quality impact of resolution within this new paradigm. Images were collected from 23 cameras, ranging from a 1 megapixel (MP) mobile phone to a 20 MP digital single-lens reflex camera (DSLR). Subjective ratings from three labs were used to explore the relationship between the camera's pixel count, the display resolution, and the overall perceived quality. This dataset and subjective ratings will be made available on the Consumer Digital Video Library (CDVL, www.cdvl.org) when this paper is published. These images can be used royalty free for research and development purposes.

Collaboration


Dive into Michele A. Saad's collaborations.

Top Co-Authors


Alan C. Bovik

University of Texas at Austin

Anish Mittal

University of Texas at Austin
