Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ke Gu is active.

Publication


Featured research published by Ke Gu.


Signal Processing | 2018

Saliency-induced reduced-reference quality index for natural scene and screen content images

Xiongkuo Min; Ke Gu; Guangtao Zhai; Menghan Hu; Xiaokang Yang

Massive content composed of both natural scenes and screen content has been generated with the increasing use of wireless and cloud computing, which calls for general image quality assessment (IQA) measures that work for both natural scene images (NSIs) and screen content images (SCIs). In this paper, we develop a saliency-induced reduced-reference (SIRR) IQA measure for both NSIs and SCIs. Image quality and visual saliency are two widely studied and closely related research topics. Traditionally, visual saliency is used as a weighting map in the final pooling stage of IQA. Instead, we treat visual saliency as a quality feature, since different types and levels of degradation strongly influence saliency detection. Image quality is then described by the similarity between two images' saliency maps. In SIRR, saliency is detected through a binary image descriptor called the "image signature", which significantly reduces the reference data. We perform extensive experiments on five large-scale NSI quality assessment databases (LIVE, TID2008, CSIQ, LIVEMD, and CID2013), as well as two recently constructed SCI QA databases, SIQAD and QACS. Experimental results show that SIRR is comparable to state-of-the-art full-reference and reduced-reference IQA measures on NSIs and outperforms most competitors on SCIs. Most importantly, SIRR is a cross-content-type measure that works efficiently for both NSIs and SCIs. The MATLAB source code of SIRR will be made publicly available with this paper.
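
A minimal sketch of the "image signature" saliency step described above, written in Python with SciPy; the SSIM-style pooling of the two saliency maps is a generic illustrative choice, not necessarily the paper's exact similarity formula.

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img, sigma=3.0):
    """Saliency map from the sign of the 2-D DCT (the 'image signature')."""
    sig = np.sign(dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho'))
    recon = idct(idct(sig, axis=0, norm='ortho'), axis=1, norm='ortho')
    return gaussian_filter(recon ** 2, sigma)

def saliency_similarity(ref, dist, c=1e-3):
    """Quality as a global similarity between two saliency maps."""
    s_ref = image_signature_saliency(ref)
    s_dst = image_signature_saliency(dist)
    return float(np.mean((2 * s_ref * s_dst + c) / (s_ref**2 + s_dst**2 + c)))
```

Because only the sign bits of the reference DCT are needed on the receiver side, the reference data stays small, which is what makes the measure reduced-reference.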


IEEE Transactions on Image Processing | 2018

Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description

Ke Gu; Vinit Jakhetiya; Junfei Qiao; Xiaoli Li; Weisi Lin; Daniel Thalmann

New challenges have emerged alongside 3D-related technologies such as virtual reality, augmented reality (AR), and mixed reality. Free-viewpoint video (FVV), with applications in remote surveillance, remote education, and beyond thanks to its flexible selection of direction and viewpoint, is widely perceived as the direction of next-generation video technology and has drawn broad research attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. Existing assessment metrics, however, do not faithfully reflect human judgments, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric of DIBR-synthesized images using autoregression (AR)-based local image description. We found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced-, and no-reference models.
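
A rough sketch of the autoregressive-residual idea: here the AR coefficients are fitted globally over all 8-neighbourhoods by least squares (the paper uses a local image description), and the prediction error serves as a geometric-distortion map.

```python
import numpy as np

def ar_residual_map(img):
    """Prediction error of a linear AR model over the 8-neighbourhood."""
    h, w = img.shape
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    # Neighbour matrix for the interior pixels, one column per shift.
    X = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].ravel()
                  for dy, dx in shifts], axis=1)
    y = img[1:-1, 1:-1].ravel()
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)  # AR coefficients
    return np.abs(y - X @ coeffs).reshape(h - 2, w - 2)
```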


IEEE Transactions on Industrial Electronics | 2018

Biologically Inspired Blind Quality Assessment of Tone-Mapped Images

Guanghui Yue; Chunping Hou; Ke Gu; Shasha Mao; Wenjun Zhang

Many tone mapping operators (TMOs) have been proposed to compress high dynamic range images into low dynamic range (LDR) images for visualization on common displays. Since compression inevitably induces quality degradation, evaluating the obtained LDR images is a challenging problem. Until now, only a few full-reference (FR) image quality assessment metrics have been proposed; however, they depend heavily on a reference image and neglect characteristics of the human visual system, which hinders practical application. In this paper, we propose an effective blind quality assessment method for tone-mapped images that requires no access to a reference image. Inspired by the observation that the performance of existing TMOs largely depends on the brightness, chromatic, and structural properties of a scene, we evaluate perceptual quality from the perspective of color information processing in the brain. Specifically, motivated by physiological and psychological evidence, we simulate the responses of single-opponent (SO) and double-opponent (DO) cells, which play an important role in processing color information. To represent textural information, we extract three features from the gray-level co-occurrence matrix (GLCM) computed from the SO responses. Meanwhile, both the GLCM and the local binary pattern descriptor are employed to extract texture and structure from the responses of DO cells. The extracted features and associated subjective ratings are then learned to reveal the connection between the feature space and human opinion scores. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art blind quality assessment methods and is comparable with popular FR methods on two recently published tone-mapped image databases.
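
A hedged sketch of the GLCM texture features mentioned above, using scikit-image; the single-/double-opponent (SO/DO) cell responses are simplified here to a plain 8-bit grayscale input, which is an assumption for illustration only.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8, distances=(1,), angles=(0, np.pi / 2)):
    """Contrast, energy and homogeneity from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return [float(graycoprops(glcm, p).mean())
            for p in ('contrast', 'energy', 'homogeneity')]
```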


Journal of Visual Communication and Image Representation | 2017

No reference image blurriness assessment with local binary patterns

Guanghui Yue; Chunping Hou; Ke Gu; Nam Ling

Highlights: Our method outperforms most recently proposed NR metrics and state-of-the-art blind sharpness/blurriness measures. Our method is inspired by piece-wise autoregressive model parameter analysis. Our method is very simple.

In this paper, we put forward an effective and efficient no-reference image blurriness assessment metric based on local binary pattern (LBP) features. We reveal that some LBP histogram bins vary monotonically with the degree of blurriness. The proposed method consists of the following steps. First, LBP maps of an input image are extracted at multiple radii. Then, the pattern histogram is analyzed and a subset of bins is chosen as features. In addition, we take the entropy of these bins as another feature. Finally, the extracted features are learned to predict the image blurriness score. The proposed method is validated on the blurred images of LIVE-II, CSIQ, TID2008, TID2013, LIVE3D IQA Phase I, and LIVE3D IQA Phase II. Experimental results demonstrate that, compared with state-of-the-art image quality assessment (IQA) methods, the proposed algorithm has notable advantages in correlation with subjective perception and computational complexity.
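
A minimal sketch of the multi-radius LBP histogram features, using scikit-image; which histogram bins are blur-sensitive is chosen arbitrarily here (an assumption), whereas the paper selects bins that vary monotonically with blurriness.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_blur_features(gray, radii=(1, 2, 3)):
    feats = []
    for r in radii:
        n_pts = 8 * r
        lbp = local_binary_pattern(gray, n_pts, r, method='uniform')
        hist, _ = np.histogram(lbp, bins=n_pts + 2,
                               range=(0, n_pts + 2), density=True)
        sel = hist[:4]  # hypothetical blur-sensitive bins
        entropy = -np.sum(sel * np.log2(sel + 1e-12))
        feats.extend(list(sel) + [entropy])
    return np.array(feats)
```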


Journal of Visual Communication and Image Representation | 2017

An efficient and effective blind camera image quality metric via modeling quaternion wavelet coefficients

Lijuan Tang; Leida Li; Kezheng Sun; Zhifang Xia; Ke Gu; Jiansheng Qian

As an extension of the Discrete and Complex Wavelet Transforms, the Quaternion Wavelet Transform (QWT) has attracted extensive attention in the past few years because it provides a better analytic representation for 2D images. The QWT of an image consists of four parts: one magnitude part and three phase parts. The magnitude is nearly shift-invariant and characterizes features at any spatial location, while the three phases represent the structure of these features. This indicates that the QWT is powerful in representing image structures and is thus suitable for image quality evaluation. In this paper, an efficient and effective Camera Image Quality Metric (CIQM) is proposed based on the QWT, which is used to describe the intrinsic structures of an image. An image is first decomposed by the QWT at three scales. Then, for each scale, the magnitude and entropy of the subband coefficients and the natural scene statistics of the third phase are calculated. The magnitude describes the generalized spectral behavior, and the entropy encodes the generalized information of distortions. Since the third phase of the QWT is considered a texture feature, its natural scene statistics are used to measure structure degradation. All these features reflect the self-similarity and independence of image content and can effectively capture image distortions. Finally, a random forest is used to build the quality model. Experiments on three camera image databases and two multiply-distorted image databases show that CIQM outperforms relevant state-of-the-art models for both authentically distorted and multiply-distorted images.
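
The quaternion wavelet transform has no standard Python implementation, so this sketch only illustrates the final stage the abstract names: regressing quality scores from a feature matrix with a random forest. The feature matrix and subjective scores below are random placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 18))      # stand-in for magnitude/entropy/NSS features
y_train = rng.uniform(0, 100, size=200)   # stand-in for subjective scores (MOS)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
predicted_quality = model.predict(X_train[:5])
```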


IEEE Transactions on Visualization and Computer Graphics | 2018

Evaluating Quality of Screen Content Images Via Structural Variation Analysis

Ke Gu; Junfei Qiao; Xiongkuo Min; Guanghui Yue; Weisi Lin; Daniel Thalmann

With the rapid development and popularity of computers, computer-generated signals have pervaded daily life. The screen content image is a typical example: unlike the deeply explored natural scene images, it also includes graphic and textual components, and it has therefore posed novel challenges to current research in compression, transmission, display, quality assessment, and more. In this paper, we focus on evaluating the quality of screen content images through an analysis of structural variation caused by compression, transmission, and so on. We classify structures into global and local structures, which correspond to basic and detailed human perception, respectively. The characteristics of graphic and textual images, e.g., limited color variation, and the human visual system are taken into consideration. Based on these concerns, we systematically combine measurements of variation in the two types of structures to yield a final quality estimate for screen content images. Thorough experiments are conducted on three screen content image quality databases, in which the images are corrupted during capturing, compression, transmission, etc. Results demonstrate the superiority of the proposed quality model compared with state-of-the-art relevant methods.
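
A loose illustration of combining a "local" and a "global" structure comparison; the specific descriptors used here (Sobel gradient-magnitude similarity and a luminance-histogram intersection, for images scaled to [0, 1]) are stand-ins, not the paper's actual measures.

```python
import numpy as np
from scipy.ndimage import sobel

def grad_mag(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def sci_quality(ref, dist, alpha=0.5, c=1e-3):
    # Local structure: pixel-wise gradient-magnitude similarity.
    g_r, g_d = grad_mag(ref), grad_mag(dist)
    local = np.mean((2 * g_r * g_d + c) / (g_r**2 + g_d**2 + c))
    # Global structure: intersection of luminance histograms.
    h_r, _ = np.histogram(ref, bins=64, range=(0, 1), density=True)
    h_d, _ = np.histogram(dist, bins=64, range=(0, 1), density=True)
    glob = np.minimum(h_r, h_d).sum() / h_r.sum()
    return float(alpha * local + (1 - alpha) * glob)
```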


Journal of Visual Communication and Image Representation | 2017

A generic denoising framework via guided principal component analysis

Tao Dai; Zhiya Xu; Haoyi Liang; Ke Gu; Qingtao Tang; Yisen Wang; Weizhi Lu; Shu-Tao Xia

Although existing state-of-the-art denoising algorithms, such as BM3D, LPG-PCA, and DDF, obtain remarkable results, these methods are not good at preserving details at high noise levels and sometimes even introduce non-existent artifacts. To improve the performance of these denoising methods at high noise levels, this paper proposes a generic denoising framework based on guided principal component analysis (GPCA). The proposed framework can be split into two stages. First, we use a statistical test to generate an initial denoised image through back projection, where the test detects significantly relevant information between the denoised image and the corresponding residual image. Second, similar image patches are collected to form patch groups, and local bases are learned from each patch group by principal component analysis. Experimental results on natural images contaminated with Gaussian and non-Gaussian noise verify the effectiveness of the proposed framework.
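
A compact sketch of the second stage only: stack similar patches, learn a local basis with PCA (via SVD), and suppress low-variance components that mostly carry noise. Patch grouping and the back-projection first stage are omitted, and the hard threshold at the noise variance is a simplification.

```python
import numpy as np

def pca_denoise_group(patches, noise_var):
    """patches: (n_patches, patch_dim) array of similar, vectorised patches."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Local basis learned from the patch group by PCA.
    _, s, Vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / len(patches)
    keep = var > noise_var            # keep components above the noise level
    coeffs = centered @ Vt.T
    coeffs[:, ~keep] = 0.0
    return coeffs @ Vt + mean
```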


Multimedia Tools and Applications | 2018

Reduced-reference quality assessment of DIBR-synthesized images based on multi-scale edge intensity similarity

Yu Zhou; Liu Yang; Leida Li; Ke Gu; Lijuan Tang

Depth-image-based rendering (DIBR) plays an important role in view synthesis for free-viewpoint video. The warping process in DIBR causes geometric displacement, which concentrates around edges, and the subsequent rendering process impairs those edges. Traditional 2D image quality metrics are of limited use for evaluating DIBR-synthesized images. In this paper, we present a reduced-reference quality metric for DIBR-synthesized images that extracts only a few feature values, namely multi-scale Edge Intensity Similarity (EIS). The original and synthesized images are first downsampled to generate images at different resolutions. Edge detection is then conducted at each scale and the edge intensity is calculated. The similarity of the edge intensity between each downsampled original image and the corresponding synthesized image is computed, and the average similarity is taken as the quality score of the DIBR-synthesized image. Experiments on the IRCCyN/IVC DIBR image and video databases demonstrate that the proposed method overall outperforms traditional 2D and existing DIBR-targeted quality metrics.
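
A minimal sketch of the EIS pipeline as described: downsample both images, measure edge intensity per scale (Sobel magnitude here, as an assumption about the edge detector), compare, and average across scales.

```python
import numpy as np
from scipy.ndimage import sobel, zoom

def edge_intensity(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def eis_score(ref, dist, scales=(1.0, 0.5, 0.25), c=1e-3):
    sims = []
    for s in scales:
        e_r = edge_intensity(zoom(ref, s))
        e_d = edge_intensity(zoom(dist, s))
        sims.append(np.mean((2 * e_r * e_d + c) / (e_r**2 + e_d**2 + c)))
    return float(np.mean(sims))
```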


Neurocomputing | 2018

Referenceless quality metric of multiply-distorted images based on structural degradation

Tao Dai; Ke Gu; Li Niu; Yongbing Zhang; Weizhi Lu; Shu-Tao Xia

Multiply-distorted images, that is, images distorted by several types of distortion simultaneously, are common in real applications. Such images pass through multiple overlapping stages (e.g., acquisition, compression, and transmission), each of which introduces a certain type of distortion, for example sensor noise during acquisition and compression artifacts during compression. However, most current blind/no-reference image quality assessment (NR-IQA) methods are designed specifically for singly-distorted images and are therefore deficient at handling multiply-distorted images. Motivated by the hypothesis that the human visual system (HVS) is adapted to the structural information in images, we assess multiply-distorted images based on structural degradation. To this end, we use both first- and high-order image structures to design a novel referenceless quality metric for multiply-distorted images. Specifically, we leverage quality-aware features extracted from both the gradient-magnitude map and the contrast-normalized map, and further improve performance by exploiting feature redundancy with the random subspace method. Experimental results on popular multiply-distorted image databases verify the outstanding performance of the proposed method.
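
A hedged sketch of the two structure maps named above, the gradient-magnitude map and an MSCN-style contrast-normalized map; the summary statistics taken from them are generic choices, not the paper's exact quality-aware features.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_features(img, sigma=7/6, eps=1e-3):
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img**2, sigma) - mu**2
    mscn = (img - mu) / np.sqrt(np.maximum(var, 0) + eps)  # contrast-normalized map
    gm = np.hypot(sobel(img, axis=0), sobel(img, axis=1))  # gradient-magnitude map
    return np.array([mscn.mean(), mscn.std(), gm.mean(), gm.std()])
```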


IEEE Transactions on Industrial Informatics | 2018

Recurrent Air Quality Predictor Based on Meteorology- and Pollution-Related Factors

Ke Gu; Junfei Qiao; Weisi Lin

Air quality is attracting rapidly increasing attention from governments and the public all over the world. In this paper, we propose a heuristic recurrent air quality predictor (RAQP) to infer air quality. The RAQP exploits key meteorology- and pollution-related variables to infer air pollutant concentrations (APCs), e.g., fine particulate matter (PM2.5). The meteorological factors and APCs at the current time strongly influence air quality at the next adjacent moment; that is, they are highly correlated. Consequently, applying simple machine learners to the current meteorology- and pollution-related factors can reliably predict air quality indices a short time later. However, owing to nonlinear and chaotic effects, these correlations decay as the time interval grows, so simple machine learners applied to current measurements alone fail to forecast air quality several hours ahead. To solve this problem, our RAQP method recurrently applies a 1-h prediction model, which learns the current records of meteorology- and pollution-related factors to predict the air quality 1 h later, in order to estimate the air quality after several hours. Extensive experiments confirm that the RAQP predictor is superior to relevant state-of-the-art techniques and nonrecurrent methods when applied to air quality prediction.
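
A sketch of the recurrent idea: a model trained to predict 1 h ahead is applied repeatedly, feeding each prediction back into the input, to reach an n-hour horizon. The feature layout (meteorological variables followed by pollutant readings) and the way predictions are folded back into the state are simplifying assumptions.

```python
import numpy as np

def recurrent_forecast(model_1h, state, n_hours, n_pollutants=1):
    """model_1h maps the current feature vector to the next hour's pollutant
    concentration; `state` = [meteorological vars..., pollutant vars...]."""
    preds = []
    x = state.copy()
    for _ in range(n_hours):
        next_apc = model_1h.predict(x.reshape(1, -1))[0]
        preds.append(next_apc)
        # Feed back: overwrite the pollutant part of the state with the prediction.
        # (A real system would also need updated meteorological variables.)
        x[-n_pollutants:] = next_apc
    return np.array(preds)
```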

Collaboration


Dive into Ke Gu's collaborations.

Top Co-Authors

Guangtao Zhai, Shanghai Jiao Tong University
Junfei Qiao, Beijing University of Technology
Xiongkuo Min, Shanghai Jiao Tong University
Weisi Lin, Nanyang Technological University