Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiaoguang Li is active.

Publication


Featured research published by Xiaoguang Li.


Journal of Visual Communication and Image Representation | 2009

Example-based image super-resolution with class-specific predictors

Xiaoguang Li; Kin-Man Lam; Guoping Qiu; Lansun Shen; Suyu Wang

Example-based super-resolution is a promising approach to solving the image super-resolution problem. However, the learning process can be slow and prediction can be inaccurate. In this paper, we present a novel learning-based algorithm for image super-resolution that improves both computational speed and prediction accuracy. Our method classifies image patches into several classes and designs a class-specific predictor for each class. A class-specific predictor takes a low-resolution image patch as input and predicts the corresponding high-resolution patch as output. The performance of the class-specific predictors is evaluated on different datasets formed from face images and natural-scene images. Experimental results demonstrate that the new method outperforms existing methods.
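
The general idea of class-specific patch predictors can be illustrated with a minimal sketch: cluster low-resolution patches and fit one regressor per cluster. The number of classes, the patch sizes, and the use of k-means plus ridge regression are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch of class-specific patch predictors for super-resolution.
# The class count, patch dimensions, and the k-means/ridge choices are
# illustrative assumptions, not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

N_CLASSES = 8              # assumed number of patch classes
LR_DIM, HR_DIM = 25, 100   # e.g. 5x5 LR patches mapped to 10x10 HR patches

def train(lr_patches, hr_patches):
    """Cluster LR patches and fit one linear predictor per cluster."""
    kmeans = KMeans(n_clusters=N_CLASSES, n_init=10).fit(lr_patches)
    predictors = []
    for c in range(N_CLASSES):
        mask = kmeans.labels_ == c
        predictors.append(Ridge(alpha=1.0).fit(lr_patches[mask], hr_patches[mask]))
    return kmeans, predictors

def predict(kmeans, predictors, lr_patch):
    """Route a low-resolution patch to its class-specific predictor."""
    c = kmeans.predict(lr_patch.reshape(1, -1))[0]
    return predictors[c].predict(lr_patch.reshape(1, -1))[0]

# toy usage with random data standing in for training patch pairs
rng = np.random.default_rng(0)
lr = rng.standard_normal((500, LR_DIM))
hr = rng.standard_normal((500, HR_DIM))
km, preds = train(lr, hr)
hr_patch_estimate = predict(km, preds, lr[0])
```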


Journal of Visual Communication and Image Representation | 2007

An adaptive algorithm for the display of high-dynamic range images

Xiaoguang Li; Kin-Man Lam; Lansun Shen

A novel algorithm based on spatial and statistical information is proposed for the display of high-dynamic-range (HDR) images. In the proposed algorithm, an image is first decomposed into a base layer and a detail layer, which represent its smoothed content and fine details, respectively. Preserving the overall impression is treated as a global issue: a statistics-based histogram adjustment is applied to the base layer. Reproducing visual details is treated as a local issue: the detail layer, obtained using a spatial filter, is adaptively enhanced according to the mapping function used for the base layer. The main contributions of the algorithm are (1) an adaptive detail-enhancement method and (2) a gain map that combines the local and global issues. Experimental results show the superior performance of our approach in terms of visual quality.
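
The base/detail decomposition idea can be sketched as follows. The Gaussian filter and the simple normalization of the base layer are stand-ins; the paper's statistical histogram adjustment and adaptive gain map are not reproduced here.

```python
# Minimal base/detail tone-mapping sketch: split log luminance into a
# smoothed base layer and a detail layer, compress the base, and boost
# the details. Filter choice and compression are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map(hdr_luminance, detail_gain=1.5, sigma=8.0):
    log_lum = np.log1p(hdr_luminance)
    base = gaussian_filter(log_lum, sigma=sigma)      # smoothed base layer
    detail = log_lum - base                           # fine details
    base_norm = (base - base.min()) / (base.max() - base.min() + 1e-8)
    ldr = base_norm + detail_gain * detail            # adaptively enhanced details
    return np.clip(ldr, 0.0, 1.0)

# toy usage: a synthetic HDR luminance map with one very bright region
hdr = np.full((64, 64), 10.0)
hdr[20:30, 20:30] = 5000.0
ldr = tone_map(hdr)
```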


International Conference on Neural Networks and Signal Processing | 2008

An efficient example-based approach for image super-resolution

Xiaoguang Li; Kin-Man Lam; Guoping Qiu; Lansun Shen; Suyu Wang

A novel algorithm for image super-resolution with class-specific predictors is proposed in this paper. In our algorithm, the training example images are classified into several classes, and each patch of a low-resolution image is assigned to one of these classes. The high-frequency information of each patch is then inferred using a class-specific predictor trained on samples from the same class. Two different types of training sets are employed to investigate the impact of the training database. Experimental results show the superior performance of our method.
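
As a complement to the predictor sketch above, the training pairs for such a scheme can be constructed by pairing interpolated low-resolution patches with the high-frequency residual they are missing. The 2x zoom factor, cubic interpolation, and 5x5 patch size below are assumptions for illustration.

```python
# Sketch of building (LR patch, high-frequency residual) training pairs
# for class-specific predictors; zoom factor and patch size are assumed.
import numpy as np
from scipy.ndimage import zoom

def make_training_pairs(hr_image, factor=2, patch=5):
    lr = zoom(hr_image, 1.0 / factor, order=3)         # simulate the LR image
    lr_up = zoom(lr, factor, order=3)                  # interpolate back up
    lr_up = lr_up[:hr_image.shape[0], :hr_image.shape[1]]
    high_freq = hr_image - lr_up                       # detail the predictor must learn
    xs, ys = [], []
    for i in range(0, hr_image.shape[0] - patch, patch):
        for j in range(0, hr_image.shape[1] - patch, patch):
            xs.append(lr_up[i:i + patch, j:j + patch].ravel())
            ys.append(high_freq[i:i + patch, j:j + patch].ravel())
    return np.array(xs), np.array(ys)

# toy usage on a random array standing in for a training image
hr = np.random.rand(64, 64)
X, Y = make_training_pairs(hr)
```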


Neurocomputing | 2016

ORB feature based web pornographic image recognition

Li Zhuo; Zhen Geng; Jing Zhang; Xiaoguang Li

To meet the requirements of Web pornographic image recognition on both precision and speed, a pornographic image recognition method based on Oriented FAST and Rotated BRIEF (ORB) features is proposed in this paper. The recognition process is divided into two parts: coarse detection and fine detection. Coarse detection quickly identifies non-pornographic images that contain few or no skin-color regions, as well as facial images. For the remaining images, which contain larger skin-color regions, fine detection is performed in three steps: (1) extract ORB descriptors from the skin-color regions and represent them with a Bag of Words (BoW) model; (2) construct the feature vector by combining the ORB feature with a 72-dimensional Hue, Saturation, Value (HSV) color feature of the whole image; (3) train a classification model with a Support Vector Machine (SVM) and apply it for image recognition. Experimental results show that the proposed method achieves better recognition precision and reduces the average time cost to a quarter of that of the method based on the Scale Invariant Feature Transform (SIFT).
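
The fine-detection pipeline (ORB descriptors encoded with a BoW vocabulary, concatenated with an HSV color histogram, fed to an SVM) can be sketched as below. The vocabulary size, the 8x3x3 HSV quantization, and the SVM settings are assumptions, and skin-color masking from the coarse stage is omitted.

```python
# Hedged sketch of an ORB + BoW + HSV + SVM feature pipeline.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

VOCAB_SIZE = 200  # assumed BoW vocabulary size

def orb_bow_feature(image_bgr, vocab_kmeans):
    """Histogram of ORB visual words over the whole image."""
    orb = cv2.ORB_create()
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = orb.detectAndCompute(gray, None)
    hist = np.zeros(VOCAB_SIZE, dtype=np.float32)
    if desc is not None:
        for w in vocab_kmeans.predict(desc.astype(np.float32)):
            hist[w] += 1
        hist /= hist.sum() + 1e-8
    return hist

def hsv_feature(image_bgr):
    """One plausible 72-D colour feature: an 8x3x3 joint HSV histogram."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 3, 3],
                        [0, 180, 0, 256, 0, 256]).flatten()
    return hist / (hist.sum() + 1e-8)

def image_feature(image_bgr, vocab_kmeans):
    return np.concatenate([orb_bow_feature(image_bgr, vocab_kmeans),
                           hsv_feature(image_bgr)])

# training outline (images, labels and pooled descriptors assumed available):
# vocab = KMeans(n_clusters=VOCAB_SIZE).fit(all_orb_descriptors)
# clf = SVC(kernel="rbf").fit([image_feature(im, vocab) for im in images], labels)
```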


International Conference on Signal Processing | 2012

An object-based unequal encryption method for H.264 compressed surveillance videos

Yingdi Zhao; Li Zhuo; Mao Niansheng; Jing Zhang; Xiaoguang Li

Multimedia information security has become a hot research topic in recent years. In this paper, an object-based unequal encryption method for H.264-compressed surveillance videos is proposed, based on a bit-sensitivity analysis of the bitstream. In the proposed method, moving object detection and segmentation are performed first, so that the location information of the objects can be extracted directly from the H.264 bitstream. The object regions and the background in the surveillance video are then encrypted unequally: the most sensitive bits in the macroblocks covering the object regions are fully encrypted, while those in the background macroblocks are selectively encrypted. Experimental results show that the proposed object-based unequal encryption method effectively ensures the security of surveillance scenes at low computational complexity. Furthermore, object detection and encryption operate entirely in the compressed domain, the output encrypted bitstream remains format-compliant with the H.264 standard, and the compression ratio is maintained.
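
The unequal-encryption idea can be illustrated without an H.264 parser: XOR the sensitive bytes of object-region macroblocks completely, and only a subset of them for background macroblocks. The SHA-256 counter-mode keystream and the one-in-four background selection below are assumptions, not the paper's cipher or selection rule.

```python
# Conceptual sketch of object-based unequal encryption of macroblock data.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Simple SHA-256 counter-mode keystream (illustrative stand-in cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_macroblock(sensitive_bytes: bytes, key: bytes, is_object: bool) -> bytes:
    ks = keystream(key, len(sensitive_bytes))
    out = bytearray(sensitive_bytes)
    step = 1 if is_object else 4      # every byte for objects, every 4th for background
    for i in range(0, len(out), step):
        out[i] ^= ks[i]
    return bytes(out)

# toy usage: the same "sensitive bits" encrypted as object region vs. background
key = b"surveillance-key"
object_mb = encrypt_macroblock(b"\x12\x34\x56\x78", key, is_object=True)
background_mb = encrypt_macroblock(b"\x12\x34\x56\x78", key, is_object=False)
```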


Neurocomputing | 2016

A K-PLSR-based color correction method for TCM tongue images under different illumination conditions

Li Zhuo; Pei Zhang; Panling Qu; Yuanfan Peng; Jing Zhang; Xiaoguang Li

In this paper, a Kernel Partial Least Squares Regression (K-PLSR)-based color correction method for Traditional Chinese Medicine (TCM) tongue images captured under different illumination conditions is proposed. The values of the 24 patches of the Munsell ColorChecker captured under different illumination conditions and their reference values are taken as the input and output, respectively. A mapping model between input and output is established with K-PLSR in the device-independent CIE LAB color space and is then applied to correct the captured tongue images. Experimental results show that, with the proposed method, the average color difference per color patch is only 0.821 after correction. Subjectively, tongue images captured under different illumination conditions yield consistent correction results, which benefits subsequent standardized tongue image storage and automatic analysis in TCM tongue diagnosis. Compared with the commonly used polynomial-based and support vector regression-based correction methods, the proposed method achieves superior color correction performance in both subjective and objective evaluations.
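
A rough stand-in for the kernel regression step is to compute an RBF kernel over the 24 captured patch values and regress it onto the reference values with scikit-learn's linear PLSRegression. This approximates K-PLSR but is not the paper's exact formulation; the kernel width, the number of components, and the LAB conversion (omitted here) are assumptions.

```python
# Approximate kernel-PLS colour correction sketch (not the paper's K-PLSR).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cross_decomposition import PLSRegression

def fit_correction(captured_lab, reference_lab, gamma=0.05, n_components=5):
    K = rbf_kernel(captured_lab, captured_lab, gamma=gamma)   # kernel features
    pls = PLSRegression(n_components=n_components).fit(K, reference_lab)
    return captured_lab, pls, gamma

def correct(colors_lab, model):
    train_x, pls, gamma = model
    K = rbf_kernel(colors_lab, train_x, gamma=gamma)
    return pls.predict(K)

# toy usage: 24 patches with a simulated illumination shift
rng = np.random.default_rng(1)
reference = rng.uniform(0, 100, size=(24, 3))
captured = reference * 0.9 + 5 + rng.normal(0, 1, size=(24, 3))
model = fit_correction(captured, reference)
corrected = correct(captured, model)
```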


IEEE International Conference on Progress in Informatics and Computing | 2015

A comparative study of local feature extraction algorithms for Web pornographic image recognition

Zhen Geng; Li Zhuo; Jing Zhang; Xiaoguang Li

Feature extraction plays an important part in content-based pornographic image recognition. In this paper, the performance of six prominent local feature extraction algorithms is compared and analyzed for Web pornographic image recognition: Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), Fast Retina Keypoint (FREAK), Binary Robust Invariant Scalable Keypoints (BRISK), and KAZE. Through comparison experiments based on the same image recognition scheme, we conclude that SURF obtains the highest recognition precision, while ORB achieves a good trade-off between recognition speed and precision.
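
The kind of speed comparison reported here can be reproduced in miniature with OpenCV, as sketched below. SURF and FREAK are omitted because they require the opencv-contrib build; this is illustrative timing code under those assumptions, not the paper's evaluation protocol.

```python
# Time several OpenCV local feature extractors on one image (OpenCV >= 4.4).
import time
import cv2
import numpy as np

detectors = {
    "SIFT": cv2.SIFT_create(),
    "ORB": cv2.ORB_create(),
    "BRISK": cv2.BRISK_create(),
    "KAZE": cv2.KAZE_create(),
}

image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in grayscale image

for name, det in detectors.items():
    start = time.perf_counter()
    keypoints, descriptors = det.detectAndCompute(image, None)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(keypoints)} keypoints in {elapsed * 1000:.1f} ms")
```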


International Conference on Control, Automation, Robotics and Vision | 2014

Automatic tongue color analysis of traditional Chinese medicine based on image retrieval

Li Zhuo; Pei Zhang; Bo Cheng; Xiaoguang Li; Jing Zhang

Content-Based Image Retrieval (CBIR) characterizes image content by extracting visual features and measures similarity by the distance between feature vectors. This paper adopts CBIR for automatic tongue color analysis in Traditional Chinese Medicine (TCM). First, the visual features of the tongue image to be analyzed, especially the color features, are extracted; similar tongue images, labeled in advance by TCM doctors, are then retrieved from the database. Finally, a statistical decision method is applied to the retrieval results to classify the tongue color. Experimental results show that the proposed method achieves classification accuracies of 87.85% and 88.54% for tongue substance color and tongue coating color, respectively. The proposed method provides a new means for automatic tongue color analysis in TCM and is also a new application of CBIR.
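
The retrieve-then-vote scheme can be sketched as follows: a color histogram feature, nearest-neighbour retrieval from a labeled database, and a majority vote over the retrieved labels. The HSV histogram, Euclidean distance, and k = 10 retrieved images are assumptions for illustration.

```python
# Minimal retrieval-and-vote sketch for tongue color classification.
from collections import Counter
import numpy as np
import cv2

def color_feature(image_bgr, bins=(8, 8, 8)):
    """Normalised joint HSV histogram as the colour feature."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256]).flatten()
    return hist / (hist.sum() + 1e-8)

def classify_tongue_color(query_feat, db_feats, db_labels, k=10):
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]                    # retrieve k similar images
    votes = Counter(db_labels[i] for i in nearest)     # statistical decision by vote
    return votes.most_common(1)[0][0]
```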


IEEE International Conference on Multimedia Big Data | 2016

Convolutional Neural Networks Based Pornographic Image Classification

Kailong Zhou; Li Zhuo; Zhen Geng; Jing Zhang; Xiaoguang Li

Given that pornographic images are flooding the Web, we propose a pornographic image recognition method based on a convolutional neural network (CNN). The method is divided into two parts: coarse detection and fine detection. Because the majority of images are normal, coarse detection is used to quickly identify normal images that contain few or no skin-color regions, as well as facial images. Images containing larger skin-color regions require further identification through fine detection. The CNN is first trained using a strategy of pre-training on mid-level features followed by non-fixed fine-tuning; the trained model is then used to classify whether an image is pornographic. Experiments show that our method outperforms existing state-of-the-art methods.
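
A common way to realize pre-training followed by non-fixed (full-network) fine-tuning for a binary classifier is sketched below. The ResNet-18 backbone, optimizer, and learning rate are assumptions; the paper does not specify its exact architecture here.

```python
# Hedged sketch of pre-training plus non-fixed fine-tuning for a binary
# pornographic/normal classifier (PyTorch with torchvision >= 0.13 assumed).
import torch
import torch.nn as nn
from torchvision import models

# start from an ImageNet pre-trained backbone and replace the classifier head
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # pornographic vs. normal

# "non-fixed" fine-tuning: all layers remain trainable, not just the new head
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One fine-tuning step on a batch passed on from the fine-detection stage."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```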


International Conference on Signal Processing | 2012

A face hallucination algorithm via KPLS-eigentransformation model

Xiaoguang Li; Qing Xia; Li Zhuo; Kin-Man Lam

In this paper, we present a novel eigentransformation-based algorithm for face hallucination. The traditional eigentransformation method is a linear subspace approach that represents an image as a linear combination of training samples; consequently, it cannot effectively capture the relationship between low-resolution facial images and their high-resolution counterparts. In our algorithm, a Kernel Partial Least Squares (KPLS) predictor is introduced into the eigentransformation model to estimate the high-resolution (HR) image from a low-resolution (LR) facial image. We compared the proposed method with several existing super-resolution (SR) algorithms at different zooming factors. Experimental results show that our algorithm provides improved performance over the compared methods in terms of both visual quality and numerical error.
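
The linear eigentransformation baseline that the paper extends can be sketched as follows: express an LR face as a weighted combination of LR training faces, then apply the same weights to the HR training faces. The ridge regularization and image sizes are assumptions, and the KPLS predictor the paper adds is not reproduced.

```python
# Sketch of the linear eigentransformation baseline for face hallucination.
import numpy as np

def eigentransform(lr_train, hr_train, lr_input, reg=1e-3):
    """lr_train: (n, d_lr), hr_train: (n, d_hr), lr_input: (d_lr,)."""
    lr_mean = lr_train.mean(axis=0)
    A = (lr_train - lr_mean).T                        # d_lr x n basis of LR faces
    # ridge-regularised least-squares weights over the training faces
    w = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]),
                        A.T @ (lr_input - lr_mean))
    hr_mean = hr_train.mean(axis=0)
    return hr_mean + (hr_train - hr_mean).T @ w       # same weights on HR faces

# toy usage with random vectors standing in for vectorised face images
rng = np.random.default_rng(2)
lr_faces = rng.standard_normal((50, 24 * 24))
hr_faces = rng.standard_normal((50, 96 * 96))
hr_estimate = eigentransform(lr_faces, hr_faces, lr_faces[0])
```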

Collaboration


Dive into Xiaoguang Li's collaborations.

Top Co-Authors

Li Zhuo (Beijing University of Technology)
Jing Zhang (Beijing University of Technology)
Kin-Man Lam (Hong Kong Polytechnic University)
Qiang Zhang (Beijing University of Technology)
Hui Zhang (Beijing University of Technology)
Lansun Shen (Beijing University of Technology)
Qing Xia (Beijing University of Technology)
Suyu Wang (Beijing University of Technology)
Zhen Geng (Beijing University of Technology)
Fenghui Li (Beijing University of Technology)