Publication


Featured research published by Qiuping Jiang.


Journal of Electronic Imaging | 2015

Three-dimensional visual comfort assessment via preference learning

Qiuping Jiang; Feng Shao; Gangyi Jiang; Mei Yu; Zongju Peng

Three-dimensional (3-D) visual comfort assessment (VCA) is a particularly important and challenging topic, which involves automatically predicting the degree of visual comfort in line with human subjective judgment. State-of-the-art VCA models typically focus on minimizing the distance between predicted visual comfort scores and subjective mean opinion scores (MOSs) by training a regression model. However, obtaining precise MOSs is often expensive and time-consuming, which greatly constrains the scalability of existing MOS-aware VCA models. This study is inspired by the fact that humans tend to make a preference judgment between two stereoscopic images in terms of visual comfort. We propose to train a robust VCA model on a set of preference labels instead of MOSs. The preference label, representing the relative visual comfort of a preference stereoscopic image pair (PSIP), is generally precise and can be obtained at much lower cost than an MOS. More specifically, some representative stereoscopic images are first selected to generate the PSIP training set. Then, we use a support vector machine to learn a preference classification model by taking the differential feature vector and the corresponding preference label of each PSIP as input. Finally, given a testing sample, by considering a full-round paired comparison with all the selected representative stereoscopic images, the visual comfort score can be estimated via a simple linear mapping strategy. Experimental results on our newly built 3-D image database demonstrate that the proposed method achieves better performance than models trained on MOSs.
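The testing stage described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the linear preference rule, the 1-D features, and the [1, 5] score range are all made-up placeholders standing in for the trained SVM and real comfort features.

```python
import numpy as np

# Hypothetical preference scorer standing in for the trained SVM: given the
# differential feature vector (test minus reference), it returns +1 if the
# test image is judged more comfortable, else -1.
def preference(diff_vec, w):
    return 1 if float(np.dot(w, diff_vec)) > 0 else -1

def estimate_comfort_score(test_feat, ref_feats, w, score_range=(1.0, 5.0)):
    """Full-round paired comparison against all representative images,
    then a simple linear mapping of the win ratio to a comfort score."""
    wins = sum(preference(test_feat - r, w) > 0 for r in ref_feats)
    ratio = wins / len(ref_feats)
    lo, hi = score_range
    return lo + ratio * (hi - lo)

# Toy 1-D features where larger values mean "more comfortable".
refs = [np.array([0.2]), np.array([0.5]), np.array([0.8])]
w = np.array([1.0])
print(estimate_comfort_score(np.array([0.9]), refs, w))  # beats all 3 refs -> 5.0
print(estimate_comfort_score(np.array([0.1]), refs, w))  # beats none -> 1.0
```

The point of the sketch is that only pairwise preference decisions are needed at test time; the absolute score falls out of the win ratio over the representative set.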


Journal of Visual Communication and Image Representation | 2015

Supervised dictionary learning for blind image quality assessment using quality-constraint sparse coding

Qiuping Jiang; Feng Shao; Gangyi Jiang; Mei Yu; Zongju Peng

In this paper, we propose a supervised dictionary learning framework for blind image quality assessment (BIQA) using quality-constraint sparse coding. Unlike the traditional dictionary learning framework, which only ensures that the learnt dictionary accounts for image features, we add a quality-related regularization term to jointly learn a feature-related dictionary and a quality-related dictionary. Specifically, the feature-related and quality-related dictionaries share the same sparse coefficients, so that the reconstruction errors for both the image feature vectors and the quality score vectors are minimized. Once the feature-related and quality-related dictionaries are learned, given a testing sample, we first extract its feature vector and compute the corresponding sparse coefficients w.r.t. the learnt feature-related dictionary; its quality score can then be directly reconstructed from the learnt quality-related dictionary and the estimated sparse coefficients. Experimental results on three publicly available IQA databases show the promising performance of the proposed model.
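The testing stage admits a compact sketch, assuming the two dictionaries with shared coefficients are already learned. The greedy top-k atom selection below is a crude stand-in for a proper sparse solver such as OMP, and `D_feat` (one atom per column) and `q_dict` (one quality value per atom) are illustrative placeholders, not the paper's learned dictionaries.

```python
import numpy as np

def predict_quality(feature, D_feat, q_dict, k=3):
    """Sparse-code the feature w.r.t. the feature-related dictionary, then
    reconstruct the quality score from the quality-related dictionary using
    the same (shared) coefficients."""
    # pick the k atoms most correlated with the feature vector (crude sparsity)
    idx = np.argsort(-np.abs(D_feat.T @ feature))[:k]
    # least-squares coefficients over the selected atoms
    alpha, *_ = np.linalg.lstsq(D_feat[:, idx], feature, rcond=None)
    # the shared coefficients reconstruct the quality score directly
    return float(q_dict[idx] @ alpha)

# Toy example: a 4-atom orthonormal dictionary.
D_feat = np.eye(4)
q_dict = np.array([1.0, 2.0, 3.0, 4.0])  # quality value carried by each atom
print(predict_quality(np.array([0.0, 1.0, 0.0, 0.0]), D_feat, q_dict))  # -> 2.0
```

The design point is that no regression step is needed at test time: because the coefficients are shared across the two dictionaries during training, the quality score is a direct linear reconstruction.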


IEEE Signal Processing Letters | 2016

On Predicting Visual Comfort of Stereoscopic Images: A Learning to Rank Based Approach

Qiuping Jiang; Feng Shao; Weisi Lin; Gangyi Jiang

Predicting the degree of experienced visual comfort in the context of stereoscopic 3-D (S3D) viewing is particularly challenging. In this letter, a simple yet effective visual comfort assessment (VCA) approach for stereoscopic images is proposed from the perspective of learning to rank (L2R). The proposed L2R-based VCA (L2R-VCA) approach is inspired by the traditional absolute categorical rating (ACR) methodology in subjective studies and aims to characterize the qualitative rating behavior of human subjects. Experimental results on our recently built database confirm the promising performance of the proposed L2R-VCA approach, yielding higher consistency with human subjective judgments.


Displays | 2015

Visual discomfort relaxation for stereoscopic 3D images by adjusting zero-disparity plane for projection

Feng Shao; Zhutuan Li; Qiuping Jiang; Gangyi Jiang; Mei Yu; Zongju Peng

It is a challenging task to improve the visual comfort of a stereoscopic 3D (S3D) image while maintaining a satisfactory viewing experience. In this paper, we propose a visual comfort improvement scheme based on adjusting the zero-disparity plane (ZDP) for projection. The degree of visual discomfort is predicted by considering three factors: spatial frequency, disparity response, and visual attention. Then, the selection of an optimal ZDP is guided by the predicted visual discomfort map. Finally, the ranges of the crossed and uncrossed disparities are automatically adjusted according to the ZDP. Experimental results show that the proposed scheme is effective in improving visual comfort while leaving the depth sensation unchanged.


IEEE Transactions on Systems, Man, and Cybernetics | 2018

Learning Sparse Representation for Objective Image Retargeting Quality Assessment

Qiuping Jiang; Feng Shao; Weisi Lin; Gangyi Jiang

The goal of image retargeting is to adapt source images to target displays with different sizes and aspect ratios. Different retargeting operators create different retargeted images, and a key problem is to evaluate the performance of each retargeting operator. Subjective evaluation is the most reliable, but it is cumbersome and labor-intensive, and, more importantly, it is hard to embed into online optimization systems. This paper explores the effectiveness of sparse representation for objective image retargeting quality assessment. The principal idea is to extract distortion-sensitive features from one image (e.g., the retargeted image) and investigate how many of these features are preserved or changed in the other (e.g., the source image) to measure the perceptual similarity between them. To create a compact and robust feature representation, we learn two overcomplete dictionaries to represent the distortion-sensitive features of an image. Features capturing both local geometric structure and global context information are addressed in the proposed framework. The intrinsic discriminative power of sparse representation is then exploited to measure the similarity between the source and retargeted images. Finally, individual quality scores are fused into an overall quality by a typical regression method. Experimental results on several databases demonstrate the superiority of the proposed method.


Pattern Recognition | 2018

Learning a Referenceless Stereopair Quality Engine with Deep Nonnegativity Constrained Sparse Autoencoder

Qiuping Jiang; Feng Shao; Weisi Lin; Gangyi Jiang

This paper proposes a no-reference (NR)/referenceless quality evaluation method for stereoscopic three-dimensional (S3D) images based on a deep nonnegativity constrained sparse autoencoder (DNCSAE). Because the perceived quality of a stereopair is determined not only by the individual left- and right-image qualities but also by their interactions, a three-column DNCSAE framework is customized, with individual DNCSAE modules handling the left image, the right image, and the cyclopean image, respectively. In the proposed framework, each DNCSAE module shares the same network architecture, consisting of multiple stacked NCSAE layers followed by a Softmax regression layer. The contribution of our model is that hierarchical feature evolution and nonlinear feature mapping are jointly optimized in a unified, perceptually aware deep network (DNCSAE), which resembles several important visual properties: hierarchy, sparsity, and non-negativity. More specifically, for each DNCSAE, a set of handcrafted natural scene statistic (NSS) features is taken as input in the visible layer, and the features in the hidden layers are successively evolved to deeper levels, producing increasingly discriminative quality-aware features (QAFs). Then, the QAFs in the last NCSAE layer are mapped to a quality score by Softmax regression. Finally, the three individual yet complementary quality scores estimated by the DNCSAE modules are combined in a Bayesian framework to obtain an overall 3D quality score. Experiments on three benchmark databases demonstrate the superiority of our method in terms of both prediction accuracy and generalization capability.


Information Sciences | 2018

Local and Global Sparse Representation for No-Reference Quality Assessment of Stereoscopic Images

Fucui Li; Feng Shao; Qiuping Jiang; Randi Fu; Gangyi Jiang; Mei Yu

No-reference/blind quality assessment of stereoscopic 3D images is much more challenging than for 2D images due to the poor understanding of binocular vision. In this paper, we propose a BLind Quality Evaluator for stereoscopic 3D images by learning Local and Global Sparse Representations (BLQELGSR). Specifically, at the training stage, we first construct a large-scale training set by simulating common distortions that stereoscopic images are likely to encounter, and propose a multi-modal sparse representation framework to characterize the relationship between the feature and quality spaces for all sources of information from the left, right, and cyclopean views, in both local and global manners. At the testing stage, based on the derived 3D quality prediction framework, the local and global quality scores from different sources are predicted and combined to derive a final 3D quality score. Experimental results on three 3D image quality databases show that, compared with existing methods, the devised BLQELGSR achieves better prediction performance, in closer agreement with subjective assessment.


Optics Express | 2016

Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals

Feng Shao; Qiuping Jiang; Randi Fu; Mei Yu; Gangyi Jiang

Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework for depth mapping to optimize visual comfort on S3D displays is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
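The global stage of the depth-remapping idea can be sketched as a linear rescaling of the source depth range into a target range. This is only an illustration of the basic operation; the `comfort_range` values below are made up, not the paper's calibrated comfort limits, and the actual framework adds a second, local optimization stage.

```python
import numpy as np

def remap_depth(depth, comfort_range=(-0.5, 0.5)):
    """Linearly rescale the source depth range into a target range assumed
    to be comfortable for S3D viewing (illustrative values)."""
    d_min, d_max = float(depth.min()), float(depth.max())
    lo, hi = comfort_range
    if d_max == d_min:                       # flat depth map: centre it
        return np.full_like(depth, (lo + hi) / 2.0)
    return lo + (depth - d_min) * (hi - lo) / (d_max - d_min)

d = np.array([0.0, 2.0, 4.0])
print(remap_depth(d))  # -> [-0.5  0.   0.5]
```

The remapped depth map would then drive view synthesis to generate the S3D output.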


IEEE Transactions on Circuits and Systems for Video Technology | 2018

Toward Domain Transfer for No-Reference Quality Prediction of Asymmetrically Distorted Stereoscopic Images

Feng Shao; Zhuqing Zhang; Qiuping Jiang; Weisi Lin; Gangyi Jiang

We present a no-reference quality prediction method for asymmetrically distorted stereoscopic images, which aims to transfer information from the source feature domain to the target quality domain using a label consistent K-singular value decomposition (LC-KSVD) classification framework. To this end, we construct a category-deviation database for dictionary learning that assigns a label to each stereoscopic image indicating whether its distortion is noticeable to human eyes. Then, by incorporating a category consistent term into the objective function, we learn view-specific feature and quality dictionaries to establish a semantic bridge between the source feature domain and the target quality domain. The quality pooling is comparatively simple, requiring only an estimate of the quality score based on the classification probability. The experimental results demonstrate the effectiveness of our blind metric.
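The quality-pooling step lends itself to a toy sketch: assuming the classifier outputs a probability per quality category, the score can be taken as the expectation over illustrative per-category anchor values. Both the probabilities and the anchors below are made up for illustration.

```python
def pool_quality(probs, level_scores):
    """Pool a quality score as the expectation of per-category quality
    anchors under the classifier's probability distribution."""
    assert abs(sum(probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    return sum(p * s for p, s in zip(probs, level_scores))

# Three hypothetical categories (bad / fair / good) with anchor scores.
print(pool_quality([0.1, 0.3, 0.6], [1.0, 3.0, 5.0]))  # -> 4.0
```

This keeps the pooling stage free of any extra regression model, as the abstract notes.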


IEEE Transactions on Image Processing | 2017

QoE-Guided Warping for Stereoscopic Image Retargeting

Feng Shao; Wenchong Lin; Weisi Lin; Qiuping Jiang; Gangyi Jiang

In the field of stereoscopic 3D (S3D) display, retargeting stereoscopic images to a target resolution is an interesting and meaningful problem, yet existing stereoscopic image retargeting methods do not fully take the user's Quality of Experience (QoE) into account. In this paper, we present a QoE-guided warping method for stereoscopic image retargeting, which retargets the stereoscopic image and adapts its depth range to the target display while improving the user's QoE. Our method takes shape preservation, visual comfort preservation, and depth perception preservation energies into account, and simultaneously optimizes the 2D coordinates and depth information in 3D space. It also incorporates the specific viewing configuration into the visual comfort and depth perception preservation energy constraints. Experimental results on visually uncomfortable and comfortable stereoscopic images demonstrate that, compared with existing stereoscopic image retargeting methods, the proposed method achieves a reasonable balance among the QoE factors of image quality, visual comfort, and depth perception, leading to a promising overall S3D experience.

Collaboration


Dive into Qiuping Jiang's collaborations.

Top Co-Authors

Weisi Lin

Nanyang Technological University


Yo-Sung Ho

Gwangju Institute of Science and Technology


Changhong Yu

Zhejiang Gongshang University
