Publication


Featured research published by Chaofeng Li.


IEEE Transactions on Neural Networks | 2011

Blind Image Quality Assessment Using a General Regression Neural Network

Chaofeng Li; Alan C. Bovik; Xiaojun Wu

We develop a no-reference image quality assessment (QA) algorithm that deploys a general regression neural network (GRNN). The new algorithm is trained on and successfully assesses image quality, relative to human subjectivity, across a range of distortion types. The features deployed for QA include the mean value of the phase congruency image, the entropy of the phase congruency image, the entropy of the distorted image, and the gradient of the distorted image. Image quality estimation is accomplished by approximating the functional relationship between these features and subjective mean opinion scores using a GRNN. Our experimental results show that the new method accords closely with human subjective judgment.
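
As a rough sketch of this pipeline under stated assumptions (a gradient-based proxy stands in for true phase congruency, the GRNN spread sigma is illustrative, and feature scaling is left out), the feature extraction and GRNN prediction could look as follows; the GRNN reduces to Gaussian-kernel Nadaraya-Watson regression over the training features and their mean opinion scores.

```python
# Sketch only: the phase congruency map is approximated by a normalized
# gradient magnitude (a real implementation would substitute an actual
# phase congruency computation), and sigma is an illustrative value.
import numpy as np
from scipy.ndimage import sobel
from scipy.spatial.distance import cdist

def entropy(x, bins=256):
    """Shannon entropy of an image-like array."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def gradient_magnitude(img):
    return np.hypot(sobel(img, axis=1), sobel(img, axis=0))

def iqa_features(img):
    """The four features named in the abstract: mean and entropy of the
    phase congruency image, entropy of the image, mean gradient."""
    img = img.astype(float)
    g = gradient_magnitude(img)
    pc = g / (g.max() + 1e-12)          # crude stand-in for phase congruency
    return np.array([pc.mean(), entropy(pc), entropy(img), g.mean()])

def grnn_predict(X_train, y_train, X_test, sigma=0.2):
    """GRNN = Nadaraya-Watson kernel regression with a Gaussian kernel."""
    d2 = cdist(X_test, X_train, "sqeuclidean")
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

# Usage: X_train holds iqa_features() of training images, y_train their MOS;
# a new image is scored with
# grnn_predict(X_train, y_train, iqa_features(new_img)[None, :])[0].
```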


Signal Processing: Image Communication | 2010

Content-partitioned structural similarity index for image quality assessment

Chaofeng Li; Alan C. Bovik

The assessment of image quality is important in numerous image processing applications. Two prominent examples, the Structural Similarity (SSIM) index and Multi-scale Structural Similarity (MS-SSIM), operate under the assumption that human visual perception is highly adapted for extracting structural information from a scene. Results from large human studies have shown that these quality indices perform very well relative to other methods. However, SSIM and other Image Quality Assessment (IQA) algorithms are less effective when used to rate blurred and noisy images. We address this defect by considering a four-component image model that classifies local image regions according to edge and smoothness properties. In our approach, SSIM scores are weighted by region type, leading to modified versions of (G-)SSIM and MS-(G-)SSIM, called four-component (G-)SSIM (4-(G-)SSIM) and four-component MS-(G-)SSIM (4-MS-(G-)SSIM). Our experimental results show that the new approach yields results that are highly consistent with human subjective judgment of the quality of blurred and noisy images, and also delivers better overall performance than (G-)SSIM and MS-(G-)SSIM on the LIVE Image Quality Assessment Database.
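
A minimal sketch of the content-partitioned idea is given below, assuming a gradient-based four-way partition (preserved edges, changed edges, smooth, texture) with illustrative thresholds and weights; the paper's exact partition rules and weights may differ.

```python
# Sketch of a four-component, region-weighted SSIM; thresholds and
# weights are illustrative assumptions.
import numpy as np
from scipy.ndimage import sobel
from skimage.metrics import structural_similarity

def grad_mag(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def four_component_ssim(ref, dist, weights=(0.25, 0.45, 0.15, 0.15)):
    ref, dist = ref.astype(float), dist.astype(float)
    _, ssim_map = structural_similarity(ref, dist, data_range=255, full=True)
    g_ref, g_dist = grad_mag(ref), grad_mag(dist)
    t_hi, t_lo = 0.12 * g_ref.max(), 0.06 * g_ref.max()   # assumed thresholds
    # Four region types: preserved edges, changed edges, smooth, texture.
    preserved = (g_ref > t_hi) & (g_dist > t_hi)
    changed = (g_ref > t_hi) & (g_dist <= t_hi) | (g_ref <= t_hi) & (g_dist > t_hi)
    smooth = (g_ref < t_lo) & ~preserved & ~changed
    texture = ~(preserved | changed | smooth)
    score = norm = 0.0
    for region, w in zip((preserved, changed, smooth, texture), weights):
        if region.any():
            score += w * ssim_map[region].mean()
            norm += w
    return score / norm
```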


Proceedings of SPIE | 2009

Three-component weighted structural similarity index

Chaofeng Li; Alan C. Bovik

The assessment of image quality is very important for numerous image processing applications, where the goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images in a manner that is consistent with human visual judgment. Two prominent examples, the Structural Similarity Image Metric (SSIM) and Multi-scale Structural Similarity (MS-SSIM), operate under the assumption that human visual perception is highly adapted for extracting structural information from a scene. Results from large human studies have shown that these quality indices perform very well relative to other methods. However, SSIM and other IQA algorithms are less effective when used to rate blurred and noisy images. We address this defect by considering a three-component image model, leading to the development of modified versions of SSIM and MS-SSIM, which we call three-component SSIM (3-SSIM) and three-component MS-SSIM (3-MS-SSIM). A three-component image model was proposed by Ran and Farvardin [13], wherein an image is decomposed into edges, textures and smooth regions. Different image regions have different importance for visual perception, so we apply different weights to the SSIM scores according to the region in which they are calculated. Four steps are executed: (1) Calculate the SSIM (or MS-SSIM) map. (2) Segment the original (reference) image into three categories of regions (edges, textures and smooth regions): edge regions are found where a gradient magnitude estimate is large, smooth regions where the gradient magnitude estimate is small, and textured regions are taken to fall between these two thresholds. (3) Apply non-uniform weights to the SSIM (or MS-SSIM) values over the three regions; the weight for edge regions was fixed at 0.5, for textured regions at 0.25, and for smooth regions at 0.25. (4) Pool the weighted SSIM (or MS-SSIM) values, typically by taking their weighted average, thus defining a single quality index for the image (3-SSIM or 3-MS-SSIM). Our experimental results show that 3-SSIM (or 3-MS-SSIM) provides results consistent with human subjective judgments of the quality of blurred and noisy images, and also delivers better performance than SSIM (and MS-SSIM) on five types of distorted images from the LIVE Image Quality Assessment Database.
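
The four steps translate almost directly into code. The sketch below uses the 0.5/0.25/0.25 weights quoted in the abstract, while the two gradient thresholds are illustrative assumptions.

```python
# Minimal sketch of 3-SSIM: SSIM map, gradient-based segmentation of the
# reference image, region weighting, and pooling.
import numpy as np
from scipy.ndimage import sobel
from skimage.metrics import structural_similarity

def three_component_ssim(ref, dist, w_edge=0.5, w_texture=0.25, w_smooth=0.25):
    ref, dist = ref.astype(float), dist.astype(float)
    # (1) SSIM map
    _, ssim_map = structural_similarity(ref, dist, data_range=255, full=True)
    # (2) segment the reference image by gradient magnitude
    g = np.hypot(sobel(ref, axis=0), sobel(ref, axis=1))
    t_edge, t_smooth = 0.12 * g.max(), 0.06 * g.max()     # assumed thresholds
    edge, smooth = g > t_edge, g < t_smooth
    texture = ~(edge | smooth)
    # (3) + (4) weight the SSIM values by region and pool
    num = den = 0.0
    for mask, w in ((edge, w_edge), (texture, w_texture), (smooth, w_smooth)):
        if mask.any():
            num += w * ssim_map[mask].mean()
            den += w
    return num / den
```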


Journal of Electronic Imaging | 2010

Content-weighted video quality assessment using a three-component image model

Chaofeng Li; Alan C. Bovik

Objective image and video quality measures play important roles in numerous image and video processing applications. In this work, we propose a new content-weighted method for full-reference (FR) video quality assessment using a three-component image model. Using the idea that different image regions have different perceptual significance relative to quality, we deploy a model that classifies image local regions according to their image gradient properties, then apply variable weights to structural similarity image index (SSIM) (and peak signal-to-noise ratio (PSNR)) scores according to region. A frame-based video quality assessment algorithm is thereby derived. Experimental results on the Video Quality Experts Group (VQEG) FR-TV Phase 1 test dataset show that the proposed algorithm outperforms existing video quality assessment methods.
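
Since the method is frame-based, the video score is essentially a temporal pool of per-frame content-weighted scores. A minimal sketch, assuming aligned grayscale frames and a per-frame metric such as the three-component SSIM sketch above:

```python
# Frame-based pooling sketch: score each frame pair with a content-weighted
# metric and average over time; mean pooling is an assumption.
import numpy as np

def video_quality(ref_frames, dist_frames, frame_metric):
    """ref_frames / dist_frames: iterables of aligned grayscale frames."""
    scores = [frame_metric(r, d) for r, d in zip(ref_frames, dist_frames)]
    return float(np.mean(scores))

# e.g. video_quality(ref_frames, dist_frames, three_component_ssim)
```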


PLOS ONE | 2014

Blind Image Blur Assessment Using Singular Value Similarity and Blur Comparisons

Qingbing Sang; Xiaojun Wu; Chaofeng Li; Yin Lu

The increasing number of demanding consumer image applications has led to increased interest in no-reference objective image quality assessment (IQA) algorithms. In this paper, we propose a new blind blur index for still images based on singular value similarity. The algorithm consists of three steps. First, a re-blurred image is produced by applying a Gaussian blur to the test image. Second, a singular value decomposition is performed on the test image and the re-blurred image. Finally, an image blur index is constructed based on singular value similarity. Experimental results obtained on four simulated databases demonstrate that the proposed algorithm correlates highly with human judgment when assessing blur or noise distortion in images.
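
A minimal sketch of the three steps, assuming a Gaussian re-blur with an illustrative sigma and a cosine-style similarity between the singular-value spectra (the paper's exact similarity measure may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blind_blur_index(img, sigma=3.0):
    img = img.astype(float)
    reblurred = gaussian_filter(img, sigma)              # step 1: re-blur
    s_test = np.linalg.svd(img, compute_uv=False)        # step 2: SVDs
    s_blur = np.linalg.svd(reblurred, compute_uv=False)
    # step 3: similarity of the singular-value spectra; a value close to 1
    # means re-blurring changed little, i.e. the test image was already blurry
    return float(np.dot(s_test, s_blur) /
                 (np.linalg.norm(s_test) * np.linalg.norm(s_blur) + 1e-12))
```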


Journal of Electronic Imaging | 2014

Universal blind image quality assessment using contourlet transform and singular-value decomposition

Qingbing Sang; Xiaojun Wu; Chaofeng Li; Yin Lu

Most current state-of-the-art blind image quality assessment (IQA) algorithms require a training or learning process. Here, we have developed a completely blind IQA model that uses features derived from an image’s contourlet transform and singular-value decomposition. The model is used to build algorithms that can predict image quality without any training or any prior knowledge of the images or their distortions. The new method consists of three steps: first, the contourlet transform is applied to the image to obtain detailed high-frequency structural information; second, the singular values of the resulting “structural image” are computed; and finally, two new universal blind IQA indices are constructed utilizing the area and slope of the truncated singular-value curves of the “structural image.” Experimental results on three open databases show that the proposed algorithms deliver quality predictions that correlate highly with human subjective judgments and are highly competitive with the state of the art.
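
The sketch below follows the three steps but substitutes a plain Laplacian high-pass for the contourlet transform, which requires a dedicated implementation; the truncation fraction of the singular-value curve is also an assumption.

```python
# Training-free IQA indices from a high-frequency "structural image";
# the Laplacian is a stand-in for the contourlet transform.
import numpy as np
from scipy.ndimage import laplace

def training_free_iqa_indices(img, keep=0.1):
    structural = laplace(img.astype(float))          # high-frequency structural image
    s = np.linalg.svd(structural, compute_uv=False)
    s = s / (s[0] + 1e-12)                           # normalised singular-value curve
    k = max(2, int(keep * s.size))
    trunc = s[:k]                                    # truncated curve
    area = float(trunc.sum())                        # index 1: area under the curve
    slope = float((trunc[-1] - trunc[0]) / (k - 1))  # index 2: average slope
    return area, slope
```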


International Conference on Intelligent Computing | 2006

A comparative study on improved fuzzy support vector machines and Levenberg-Marquardt-based BP network

Chaofeng Li; Lei Xu; Shitong Wang

The paper proposes an edge-effect training multi-class fuzzy support vector machine (EFSVM). It treats training data points with different importance during the training process, emphasizing the primary contribution to classification of points distributed in the edge area of the data set by assigning them greater fuzzy membership degrees: the closer a point lies to the edge area of the training set, the greater its contribution. EFSVM is systematically compared with two other fuzzy support vector machines and a Levenberg-Marquardt-based BP algorithm (LMBP). Classification results on both the Iris data and a remote sensing image show that EFSVM performs best and can effectively improve pattern classification accuracy.
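
A minimal sketch of the idea, assuming an illustrative membership rule (distance from the class centroid, so that points near the edge of their class receive weights near 1) and using those memberships as sample weights in a multi-class RBF SVM; the paper's exact membership definition may differ.

```python
import numpy as np
from sklearn.svm import SVC

def edge_effect_memberships(X, y, eps=1e-3):
    """Larger fuzzy membership for points far from their class centroid."""
    m = np.empty(len(X))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        m[idx] = eps + (1 - eps) * d / (d.max() + 1e-12)
    return m

def train_efsvm(X, y, C=10.0, gamma="scale"):
    clf = SVC(C=C, kernel="rbf", gamma=gamma, decision_function_shape="ovr")
    clf.fit(X, y, sample_weight=edge_effect_memberships(X, y))
    return clf
```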


International Symposium on Multispectral Image Processing and Pattern Recognition | 2005

Remote sensing image classification method based on support vector machines and fuzzy membership function

Chaofeng Li; Zhengyou Wang; Lei Xu

Neural network models have made great progress in remote sensing image classification, but they have serious limitations, such as easily getting stuck in local minima, slow convergence, and difficulty in fixing the network structure. The support vector machine (SVM) is a nonlinear mapping algorithm based on statistical learning theory, developed over the last three decades by Vapnik, Chervonenkis and others, and has gained extensive application in pattern recognition, regression analysis, and related fields. Compared with neural network models, the SVM is based on a self-contained mathematical theory and solves a global optimization problem, ensuring the result is not a local minimum; these properties give the SVM excellent classification performance. This paper proposes a new hybrid classification method for remote sensing images that combines support vector machines with a fuzzy membership function. First, the method constructs a multi-class SVM classifier for the remote sensing image, discusses the parameter estimation problem, and then uses an RBF-kernel SVM to classify the whole image. Second, to address the weakness of the SVM classifier that some samples are mixed (one sample assigned to two or more categories) or missed (a sample left unclassified), a fuzzy membership function is used to reclassify these mixed and missed samples. Experimental results suggest the accuracy of this hybrid classifier is higher than that of a single SVM, a single fuzzy membership function decision method, or a BP neural network model.
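
A minimal sketch of the hybrid scheme, assuming probability outputs from an RBF-kernel SVM and a simple distance-to-class-mean membership rule for the ambiguous (mixed or missed) samples; the ambiguity threshold and the membership rule are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def hybrid_classify(X_train, y_train, X_test, ambiguity=0.10):
    svm = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True)
    svm.fit(X_train, y_train)
    proba = svm.predict_proba(X_test)
    labels = svm.classes_[proba.argmax(axis=1)]

    # Fuzzy membership: inverse distance to each class mean in feature space,
    # applied only to samples the SVM could not separate cleanly.
    means = np.stack([X_train[y_train == c].mean(axis=0) for c in svm.classes_])
    top2 = np.sort(proba, axis=1)[:, -2:]
    mixed = (top2[:, 1] - top2[:, 0]) < ambiguity      # near-tie -> reclassify
    if mixed.any():
        d = np.linalg.norm(X_test[mixed, None, :] - means[None, :, :], axis=2)
        labels[mixed] = svm.classes_[d.argmin(axis=1)]
    return labels
```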


Computational Visual Media | 2016

Learning multi-kernel multi-view canonical correlations for image recognition

Yun-Hao Yuan; Yun Li; Jianjun Liu; Chaofeng Li; Xiaobo Shen; Guoqing Zhang; Quansen Sun

This paper proposes a multi-kernel multi-view canonical correlations (M2CCs) framework for subspace learning. In the proposed framework, the input data of each original view are mapped into multiple higher-dimensional feature spaces by multiple nonlinear mappings determined by different kernels. This enables M2CC to discover multiple kinds of useful information of each original view in the feature spaces. Within the framework, we further provide a specific multi-view feature learning method based on a direct summation kernel strategy and its regularized version. Experimental results on visual recognition tasks demonstrate the effectiveness and robustness of the proposed method.
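
A minimal sketch of the direct summation kernel strategy combined with a simplified regularized kernel CCA for two views; the kernel bandwidths and the regularizer are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.linalg import solve
from sklearn.metrics.pairwise import rbf_kernel

def summed_kernel(X, gammas=(0.01, 0.1, 1.0)):
    """Direct summation of several RBF kernels = the multi-kernel mapping."""
    K = sum(rbf_kernel(X, X, gamma=g) for g in gammas)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centring in feature space
    return H @ K @ H

def kernel_cca(K1, K2, reg=1e-2, n_comp=2):
    """Simplified regularised kernel CCA on two centred Gram matrices."""
    n = K1.shape[0]
    R1, R2 = K1 + reg * n * np.eye(n), K2 + reg * n * np.eye(n)
    # (K1 + rI)^-1 K2 (K2 + rI)^-1 K1  alpha = rho^2 alpha
    M = solve(R1, K2 @ solve(R2, K1))
    vals, vecs = np.linalg.eig(M)
    alpha = vecs[:, np.argsort(-vals.real)[:n_comp]].real
    return K1 @ alpha                         # projections of view 1

# Usage: Z1 = kernel_cca(summed_kernel(X_view1), summed_kernel(X_view2))
```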


Chinese Conference on Biometric Recognition | 2015

Discriminative Scatter Regularized CCA for Multiview Image Feature Learning and Recognition

Yun-Hao Yuan; Jinlong Yang; Xiaobo Shen; Chaofeng Li; Xiaojun Wu

In this paper, we propose a novel supervised canonical correlation analysis approach based on discriminative scatter regularization for multiview image feature learning. The method simultaneously considers the between-view correlations and the within-view class label information of the training samples. The proposed method is applied to handwritten digit image recognition. Experimental results on the multiple feature dataset demonstrate the superior performance of our approach compared with existing multiview feature learning methods.
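
A minimal sketch of one way to realize discriminative scatter regularization: add the within-class scatter of each view to its covariance term and solve the resulting generalized eigenproblem. The regularization weight and this particular formulation are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.linalg import eigh

def within_class_scatter(X, y):
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    return Sw

def ds_regularized_cca(X, Y, labels, lam=0.1, n_comp=2, eps=1e-6):
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Sxy = Xc.T @ Yc
    Cx = Xc.T @ Xc + lam * within_class_scatter(X, labels)
    Cy = Yc.T @ Yc + lam * within_class_scatter(Y, labels)
    dx, dy = X.shape[1], Y.shape[1]
    A = np.zeros((dx + dy, dx + dy))
    A[:dx, dx:], A[dx:, :dx] = Sxy, Sxy.T
    B = np.zeros_like(A)
    B[:dx, :dx], B[dx:, dx:] = Cx + eps * np.eye(dx), Cy + eps * np.eye(dy)
    vals, vecs = eigh(A, B)                       # generalised eigenproblem
    W = vecs[:, np.argsort(-vals)[:n_comp]]
    return W[:dx], W[dx:]                         # projection matrices (Wx, Wy)
```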

Collaboration


Dive into Chaofeng Li's collaboration.

Top Co-Authors

Alan C. Bovik (University of Texas at Austin)
Quansen Sun (Nanjing University of Science and Technology)
Xiaobo Shen (Nanjing University of Science and Technology)
Yun Li (Yangzhou University)