Publication


Featured research published by Pengfei Xu.


Integrative Zoology | 2015

Application of the Internet of Things (IOT) to Animal Ecology

Songtao Guo; Min Qiang; Xiaorui Luan; Pengfei Xu; Gang He; Xiaoyan Yin; Xuelin Jin; Luo Xi; Xiaojiang Chen; Dingyi Fang; Baoguo Li

For ecologists, understanding the reaction of animals to environmental changes is critical. Using networked sensor technology to measure wildlife and environmental parameters can provide accurate, real-time and comprehensive data for monitoring, research and conservation of wildlife. This paper reviews: (i) conventional detection technology; (ii) concepts and applications of the Internet of Things (IoT) in animal ecology; and (iii) the advantages and disadvantages of IoT. The current theoretical limits of IoT in animal ecology are also discussed. Although IoT offers a new direction in animal ecological research, it still needs to be further explored and developed as a theoretical system and applied to the appropriate scientific frameworks for understanding animal ecology.


Neural Computation | 2017

Unsupervised 2D Dimensionality Reduction with Adaptive Structure Learning

Xiaowei Zhao; Feiping Nie; Sen Wang; Jun Guo; Pengfei Xu; Xiaojiang Chen

In recent years, unsupervised two-dimensional (2D) dimensionality reduction methods for unlabeled large-scale data have made progress. However, the performance of these methods degrades when the similarity matrix is learned in advance, at the beginning of the dimensionality reduction process. A similarity matrix is used to reveal the underlying geometric structure of the data in unsupervised dimensionality reduction methods, but because of noisy data it is difficult to learn the optimal similarity matrix beforehand. In this letter, we propose a new dimensionality reduction model for 2D image matrices: unsupervised 2D dimensionality reduction with adaptive structure learning (DRASL). Instead of using a predetermined similarity matrix to characterize the underlying geometric structure of the original 2D image space, our approach learns the similarity matrix during the dimensionality reduction procedure. To obtain a desirable neighbor assignment after dimensionality reduction, we add a constraint to our model such that the final subspace has exactly the desired number of connected components. To accomplish these goals, we propose a unified objective function that integrates dimensionality reduction, the learning of the similarity matrix, and the adaptive learning of the neighbor assignment. An iterative optimization algorithm is proposed to solve this objective function. We compare the proposed method with several 2D unsupervised dimensionality reduction methods, using k-means to evaluate clustering performance. Extensive experiments on the Coil20, AT&T, FERET, USPS, and Yale data sets verify the effectiveness of the proposed method.
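The adaptive neighbor assignment used by DRASL-style models can be sketched in a few lines: each sample's similarity row is computed in closed form from its distances to the k nearest neighbors, so nearer points receive larger weights and the row sums to 1. This is a minimal illustration of the general technique, not the paper's code; the function and variable names are assumptions.

```python
def adaptive_similarity_row(dists, k):
    """Closed-form sparse similarity row from squared distances.

    dists: squared distances from sample i to the other n samples.
    k:     number of neighbors to connect.
    Returns a list s with s[j] > 0 only for the k nearest neighbors.
    """
    n = len(dists)
    order = sorted(range(n), key=lambda j: dists[j])
    # The (k+1)-th smallest distance acts as a per-sample regularizer:
    # it drives the (k+1)-th neighbor's weight to zero, giving a k-sparse row.
    d_k1 = dists[order[k]]
    top_k_sum = sum(dists[order[j]] for j in range(k))
    denom = k * d_k1 - top_k_sum
    s = [0.0] * n
    for j in range(k):
        idx = order[j]
        s[idx] = (d_k1 - dists[idx]) / denom if denom > 0 else 1.0 / k
    return s
```

DRASL additionally constrains the learned graph so that it has exactly the desired number of connected components; that rank constraint on the graph Laplacian is omitted here for brevity.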


international joint conference on artificial intelligence | 2018

Evaluating Brush Movements for Chinese Calligraphy: A Computer Vision Based Approach

Pengfei Xu; Lei Wang; Ziyu Guan; Xia Zheng; Xiaojiang Chen; Zhanyong Tang; Dingyi Fang; Xiaoqing Gong; Zheng Wang

Chinese calligraphy is a popular, highly esteemed art form in the Chinese cultural sphere and worldwide. Ink brushes are the traditional writing tool for Chinese calligraphy, and the subtle nuances of brush movements have a great impact on the aesthetics of the written characters. However, mastering brush movement is challenging for many calligraphy learners, as it requires many years of practice and expert supervision. This paper presents a novel approach to help Chinese calligraphy learners quantify the quality of brush movements without expert involvement. Our approach extracts the brush trajectories from a video stream and compares them with example templates of reputed calligraphers to produce a score for the writing quality. We achieve this by first developing a novel neural network to extract the spatial and temporal movement features from the video stream. We then employ methods from the computer vision and signal processing domains to track the brush movement trajectory and calculate the score. We conducted extensive experiments and user studies to evaluate our approach. Experimental results show that our approach is highly accurate in identifying brush movements, yielding an average accuracy of 90%, and the generated score is within 3% error of the one given by human experts.
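The trajectory-versus-template comparison can be illustrated with dynamic time warping (DTW), a standard way to align two movement sequences of different lengths; the paper's exact scoring method may differ, so treat this as an assumed, simplified stand-in with illustrative names.

```python
def dtw_distance(traj, template):
    """Alignment cost between two 2D point sequences via dynamic programming."""
    n, m = len(traj), len(template)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (x1, y1), (x2, y2) = traj[i - 1], template[j - 1]
            cost = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            # Extend the cheapest of the three possible alignments.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def movement_score(traj, template, scale=1.0):
    """Map the length-normalized alignment cost to a 0-100 quality score."""
    cost = dtw_distance(traj, template) / max(len(traj), len(template))
    return 100.0 / (1.0 + cost / scale)
```

A perfect match scores 100, and the score decays smoothly as the learner's trajectory diverges from the calligrapher's template.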


CCF Chinese Conference on Computer Vision | 2015

A Novel Dynamic Character Grouping Approach Based on the Consistency Constraints

Pengfei Xu; Qiguang Miao; Ruyi Liu; Feng Chen; Xiaojiang Chen; Weike Nie

In optical character recognition, text strings are extracted from images so that they can be edited, formatted, indexed, searched, or translated. Characters should be grouped into text strings before recognition, but the existing methods cannot group characters accurately. This paper proposes a new approach to group characters into text strings based on consistency constraints. According to the features of the characters in topographic maps, three kinds of consistency constraints are proposed: the color, size and direction consistency constraints. With the introduction of the color consistency constraint, characters with different colors can be grouped well, and the method handles curved character strings more accurately thanks to the improved direction consistency constraint. The experimental results show that this method groups characters more accurately and lays a good foundation for text recognition.
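A toy sketch of consistency-constrained grouping: two detected characters are linked when their colors and sizes agree within thresholds and they are close enough, and connected components become text strings. The thresholds, character fields, and the proximity check (a simplified stand-in for the paper's direction constraint) are all illustrative assumptions.

```python
class Char:
    def __init__(self, x, y, size, color):
        self.x, self.y, self.size, self.color = x, y, size, color

def consistent(a, b, size_tol=0.3, color_tol=30, dist_tol=3.0):
    """True if two characters satisfy the toy consistency constraints."""
    color_ok = all(abs(c1 - c2) <= color_tol for c1, c2 in zip(a.color, b.color))
    size_ok = abs(a.size - b.size) <= size_tol * max(a.size, b.size)
    avg = 0.5 * (a.size + b.size)
    near = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 <= dist_tol * avg
    return color_ok and size_ok and near

def group_characters(chars):
    """Union-find over pairwise-consistent characters -> text strings."""
    parent = list(range(len(chars)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(chars)):
        for j in range(i + 1, len(chars)):
            if consistent(chars[i], chars[j]):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(chars)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```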


international joint conference on artificial intelligence | 2018

Robust Auto-Weighted Multi-View Clustering

Pengzhen Ren; Yun Xiao; Pengfei Xu; Jun Guo; Xiaojiang Chen; Xin Wang; Dingyi Fang

Multi-view clustering has played a vital role in real-world applications. It aims to cluster data points into different groups by exploring the complementary information of multiple views. A major challenge of this problem is how to learn an explicit cluster structure from multiple views when there is considerable noise. To solve this challenging problem, we propose a novel Robust Auto-weighted Multi-view Clustering (RAMC) model, which aims to learn an optimal graph with exactly k connected components, where k is the number of clusters. The ℓ1-norm is employed for robustness of the proposed algorithm, which we validate in the experiments. The new graph learned by the proposed model approximates the original graph of each individual view but maintains an explicit cluster structure. With this optimal graph, we can immediately obtain the clustering results without any further post-processing. We conduct extensive experiments to confirm the superiority and robustness of the proposed algorithm.
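The auto-weighting idea behind RAMC-style models can be sketched as follows: a consensus graph is fused from the per-view graphs, and each view's weight is derived from how close its graph is to the consensus, so noisy views are down-weighted automatically. RAMC itself also uses the ℓ1-norm and a rank constraint for exactly k components; both are omitted here, and the Frobenius-based weight update below is an illustrative assumption.

```python
def frob_dist(A, B):
    """Frobenius distance between two equally sized matrices (lists of lists)."""
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb)) ** 0.5

def fuse_views(views, iters=20, eps=1e-8):
    """Alternate between updating the consensus graph and the view weights."""
    m = len(views)
    w = [1.0 / m] * m
    n = len(views[0])
    for _ in range(iters):
        total = sum(w)
        # Consensus graph: weighted average of the per-view graphs.
        S = [[sum(w[v] * views[v][i][j] for v in range(m)) / total
              for j in range(n)] for i in range(n)]
        # Views far from the consensus receive small weights.
        w = [1.0 / (2.0 * frob_dist(S, views[v]) + eps) for v in range(m)]
    return S, [x / sum(w) for x in w]
```

With this optimal graph in the full model, cluster labels fall out directly from the connected components, with no further post-processing.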


computer and communications security | 2018

Yet Another Text Captcha Solver: A Generative Adversarial Network Based Approach

Guixin Ye; Zhanyong Tang; Dingyi Fang; Zhanxing Zhu; Yansong Feng; Pengfei Xu; Xiaojiang Chen; Zheng Wang

Although several attacks have been proposed, text-based CAPTCHAs are still widely used as a security mechanism. One of the reasons for the pervasive use of text captchas is that many of the prior attacks are scheme-specific and require a labor-intensive and time-consuming process to construct. This means that a change in the captcha security features, such as a noisier background, can simply invalidate an earlier attack. This paper presents a generic yet effective text captcha solver based on the generative adversarial network. Unlike prior machine-learning-based approaches that need a large volume of manually labeled real captchas to learn an effective solver, our approach requires significantly fewer real captchas but yields much better performance. This is achieved by first learning a captcha synthesizer to automatically generate synthetic captchas to train a base solver, and then fine-tuning the base solver on a small set of real captchas using transfer learning. We evaluate our approach by applying it to 33 captcha schemes, including 11 schemes currently used by 32 of the top-50 popular websites, including Microsoft, Wikipedia, eBay and Google. Our approach is the most capable attack on text captchas seen to date. It outperforms four state-of-the-art text-captcha solvers by not only delivering significantly higher accuracy on all testing schemes, but also successfully attacking schemes where others have zero chance. We show that our approach is highly efficient, as it can solve a captcha within 0.05 seconds using a desktop GPU. We demonstrate that our attack is generally applicable because it can bypass the advanced security features employed by most modern text captcha schemes. We hope the results of our work encourage the community to revisit the design and practical use of text captchas.


Multimedia Tools and Applications | 2018

Nighttime image Dehazing with modified models of color transfer and guided image filter

Bo Jiang; Hongqi Meng; Xiaolei Ma; Lin Wang; Yan Zhou; Pengfei Xu; Siyu Jiang; Xianjia Meng

Taking into account the illumination characteristics of nighttime imaging, a new method for nighttime image dehazing is proposed in this paper. First, based on color transfer theory, the illumination level of a nighttime hazy image is enhanced by flexibly selecting a reference image. In contrast to the classical color transfer model, which transfers statistics from the whole reference image to the whole target image, the modified model focuses on the different characteristics of the various regions of the original image, and it works well even when the nighttime image is affected by various artificial light sources. Second, an enhancement-based dehazing method built on guided image filtering is adopted, since the key parameters of dehazing methods based on the atmospheric degradation model are difficult to obtain under nighttime imaging conditions. In addition, the key parameters of the guided image filter are selected according to the boundary information of the original image rather than the original image itself, which makes the method more suitable for dehazing images taken on hazy nights. The experimental results show that the proposed method outperforms classical daytime dehazing methods, and it surpasses a well-known nighttime dehazing method in suppressing color distortion and controlling background illumination. The evaluations cover both subjective and objective criteria, making the conclusions more convincing.
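The guided image filter at the core of the method can be illustrated in one dimension: each local window fits a linear model of the output on the guide signal, and averaging the per-window coefficients yields an edge-preserving result. This is the standard filter only, as a minimal 1D sketch; the paper's modification (choosing parameters from boundary information) is not reproduced here.

```python
def box_mean(x, r):
    """Mean of x over a window of radius r (truncated at the borders)."""
    n = len(x)
    return [sum(x[max(0, i - r):min(n, i + r + 1)]) /
            len(x[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def guided_filter_1d(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of signal p guided by I (1D toy version)."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    corr_Ip = box_mean([a * b for a, b in zip(I, p)], r)
    corr_II = box_mean([a * a for a in I], r)
    cov_Ip = [c - mi * mp for c, mi, mp in zip(corr_Ip, mean_I, mean_p)]
    var_I = [c - mi * mi for c, mi in zip(corr_II, mean_I)]
    # Per-window linear model q = a*I + b; eps controls edge preservation.
    a = [c / (v + eps) for c, v in zip(cov_Ip, var_I)]
    b = [mp - ai * mi for mp, ai, mi in zip(mean_p, a, mean_I)]
    mean_a, mean_b = box_mean(a, r), box_mean(b, r)
    return [ma * i + mb for ma, mb, i in zip(mean_a, mean_b, I)]
```

With a small eps, high-variance windows keep a close to 1, so edges pass through largely intact while flat regions are smoothed.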


Multimedia Tools and Applications | 2018

Artistic features extraction from chinese calligraphy works via regional guided filter with reference image

Lei Wang; Xiaoqing Gong; Yongqin Zhang; Pengfei Xu; Xiaojiang Chen; Dingyi Fang; Xia Zheng; Jun Guo

Chinese calligraphy is a unique visual art and part of the material basis of China's traditional cultural heritage. However, time has caused old calligraphy works to weather and suffer damage, so it is necessary to use advanced technologies to protect them. One such technology is digital imaging: the obtained images preserve the visual information of calligraphy works well and can be used in further research. The basis of such research is to extract the artistic features, which comprise two elements: form and spirit. However, most existing methods extract only the form and ignore the characters' spirit; in particular, they are insensitive to slight variations in complex ink strokes. To solve these problems, this paper proposes an extraction method based on a regional guided filter (RGF) with reference images, which are generated by KNN matting and used as the input images for the RGF. Since the RGF is sensitive to slight variations of ink, the detailed information inside the strokes can be detected better. Moreover, unlike past works, which filter the whole strokes, the RGF filters the inside of strokes and the edges in separate windows, so that the edges are preserved accurately. Results on several famous Chinese calligraphy works demonstrate that our method extracts more accurate and complete form and spirit with a lower error rate.


Multimedia Tools and Applications | 2018

Face detection of golden monkeys via regional color quantization and incremental self-paced curriculum learning

Pengfei Xu; Songtao Guo; Qiguang Miao; Baoguo Li; Xiaojiang Chen; Dingyi Fang

Animal detection plays a vital role in wildlife protection and many other real-life applications. In this paper, we focus on face detection of golden monkeys living in the Qinling Mountains, Shaanxi province, China, and present a relatively complete face detection algorithm comprising three parts: locating the monkeys' bodies, detecting suspicious facial skin, and accurately detecting the true faces. First, regional color quantization is proposed to quantize the HSV color space for natural images of different sizes, and the areas of the monkeys' bodies are obtained from the color distribution of the monkeys' hair in the histogram of the quantized color space. The areas of suspicious facial skin are then extracted from these body areas. Further, we propose incremental self-paced curriculum learning (ISPCL) to detect the true monkey faces accurately. In our method, regional color quantization increases the color differences between the background and the monkeys' hair, so the segmented results contain fewer background pixels. Besides, the basic idea of incremental learning is introduced into the training process of SPCL, simulating the way humans learn from easy samples to hard ones, which improves face detection performance. The experimental results demonstrate that the proposed algorithm can locate the monkeys' bodies in images of different sizes and detect their faces effectively. This research lays a foundation for face recognition and behavior analysis of golden monkeys.
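The color-quantization stage can be sketched as follows: HSV pixels are mapped into coarse bins, the quantized image is histogrammed, and pixels falling into the bins associated with a target color range (e.g. the monkeys' golden hair) are kept as a body mask. The bin counts and target range below are illustrative assumptions, not the paper's settings.

```python
from collections import Counter

def quantize(pixel, h_bins=12, s_bins=4, v_bins=4):
    """Map an HSV pixel (h in [0,360), s and v in [0,1]) to a coarse bin id."""
    h, s, v = pixel
    hi = min(int(h / 360.0 * h_bins), h_bins - 1)
    si = min(int(s * s_bins), s_bins - 1)
    vi = min(int(v * v_bins), v_bins - 1)
    return (hi * s_bins + si) * v_bins + vi

def dominant_mask(pixels, target_bins):
    """Histogram the quantized pixels; keep those that land in target bins."""
    hist = Counter(quantize(p) for p in pixels)
    mask = [quantize(p) in target_bins for p in pixels]
    return mask, hist
```

In the full pipeline the histogram of the quantized space is what reveals which bins correspond to the monkeys' hair; here the target bins are simply passed in.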

Collaboration


Dive into Pengfei Xu's collaboration.

Top Co-Authors

Bo Li (Tsinghua University)
Feiping Nie (Northwestern Polytechnical University)
Siyu Jiang (Xi'an Jiaotong University)