Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bao-Gang Hu is active.

Publication


Featured research published by Bao-Gang Hu.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Maximum Correntropy Criterion for Robust Face Recognition

Ran He; Wei-Shi Zheng; Bao-Gang Hu

In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art l1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much less sensitive to outliers. To obtain a more tractable and practical approach, we impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique that approximately maximizes the objective function in an alternating way, so that the complex optimization problem reduces, at each iteration, to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with occlusion and corruption in face recognition than the related state-of-the-art methods. In particular, the proposed method improves both recognition accuracy and receiver operating characteristic (ROC) curves, while its computational cost is much lower than that of the SRC algorithms.
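The half-quadratic scheme described above reduces each iteration to a weighted nonnegative least squares problem. A minimal sketch, not the authors' implementation (the dictionary `D`, bandwidth `sigma`, and iteration count are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def mcc_sparse_code(D, y, sigma=1.0, iters=10):
    """Half-quadratic sketch of the maximum correntropy criterion:
    alternate Gaussian-kernel weights and weighted nonnegative
    least squares until the weights stabilize."""
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        r = y - D @ x                        # per-pixel residual
        w = np.exp(-r**2 / (2 * sigma**2))   # correntropy weights: outliers -> ~0
        sw = np.sqrt(w)
        x, _ = nnls(sw[:, None] * D, sw * y) # weighted NNLS subproblem
    return x
```

Large residuals receive near-zero weight, so grossly corrupted pixels barely influence the least squares fit.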


IEEE Transactions on Image Processing | 2011

Robust Principal Component Analysis Based on Maximum Correntropy Criterion

Ran He; Bao-Gang Hu; Wei-Shi Zheng; Xiangwei Kong

Principal component analysis (PCA) minimizes the mean square error (MSE) and is therefore sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on the maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to optimize the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method has the following benefits: 1) it is robust to outliers through the mechanism of MCC, which is more theoretically solid than heuristic rules based on MSE; 2) it requires no zero-mean assumption on the data and can estimate the data mean during optimization; and 3) its optimal solution consists of the principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are introduced to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method outperforms robust rotational-invariant PCAs based on the L1 norm when outliers occur.
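A minimal numerical sketch of the half-quadratic idea above, assuming a simple sample-reweighting loop for illustration (the bandwidth `sigma` and the reconstruction-error weighting rule are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def mcc_pca(X, k=1, sigma=1.0, iters=20):
    """Half-quadratic sketch of MCC-based PCA: reweight samples by a
    Gaussian kernel of their reconstruction error, then take the top-k
    eigenvectors of the weighted covariance."""
    n, d = X.shape
    w = np.ones(n)
    for _ in range(iters):
        mu = (w @ X) / w.sum()                   # robust mean estimate
        Xc = X - mu
        C = (Xc * w[:, None]).T @ Xc / w.sum()   # weighted covariance
        vals, vecs = np.linalg.eigh(C)           # ascending eigenvalues
        U = vecs[:, -k:]                         # top-k principal directions
        err = ((Xc - Xc @ U @ U.T) ** 2).sum(axis=1)
        w = np.exp(-err / (2 * sigma**2))        # downweight outliers
    return mu, U
```

Samples far from the current principal subspace get exponentially small weight, so a handful of outliers cannot tilt the recovered directions.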


Simulation | 2006

Structural Factorization of Plants to Compute Their Functional and Architectural Growth

Paul-Henry Cournède; Mengzhen Kang; Amélie Mathieu; Jean François Barczi; Hong-Ping Yan; Bao-Gang Hu; Philippe De Reffye

Numerical simulation of plant growth has faced a bottleneck due to the cumbersome computation implied by complex plant topological structures. In this article, the authors present a new mathematical model of plant growth, GreenLab, that overcomes these difficulties. GreenLab is based on a powerful factorization of the plant structure. Fast simulation algorithms are derived for deterministic and stochastic trees. The computation time no longer depends on the number of organs and grows at most quadratically with the age of the plant. This factorization finds applications in building trees very efficiently, in the context of geometric models, and in computing biomass production and distribution, in the context of functional-structural models.
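The key trick, factorizing repeated substructures, can be illustrated with a toy recursion (the branching rule below is invented for illustration and is not GreenLab's actual production rule): memoization makes the cost grow with plant age rather than with the exponentially larger organ count.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def organs(age):
    """Organ count of a substructure of a given age under a toy rule:
    each growth cycle the axis adds one metamer bearing one lateral
    substructure, and both continuations are one cycle younger.
    Substructures of equal age are identical, so each age is computed once."""
    if age == 0:
        return 0
    return 1 + organs(age - 1) + organs(age - 1)
```

`organs(30)` touches only 31 distinct subproblems even though the simulated tree carries over a billion organs.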


International Conference on Machine Learning | 2009

Robust feature extraction via information theoretic learning

Xiaotong Yuan; Bao-Gang Hu

In this paper, we present a robust feature extraction framework based on information-theoretic learning. Its objective simultaneously maximizes Renyi's quadratic information potential of the features and Renyi's cross information potential between the features and the class labels. This objective function reaps the robustness advantages of both the redescending M-estimator and manifold regularization, and can be efficiently optimized iteratively via half-quadratic optimization. In addition, the popular feature extraction algorithms LPP, SRDA, and LapRLS are all shown to be special cases of this framework. Extensive comparison experiments on several real-world data sets, with contaminated features or labels, validate the encouraging gain in algorithmic robustness from the proposed framework.
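The quantity being maximized can be estimated directly from samples. A sketch of the standard Parzen-window estimator of Renyi's quadratic information potential, for 1-D data with an illustrative bandwidth `sigma`:

```python
import numpy as np

def quadratic_information_potential(x, sigma=1.0):
    """Sample estimate of Renyi's quadratic information potential:
    V(X) = (1/n^2) * sum_ij N(x_i - x_j; 0, 2*sigma^2),
    from which Renyi's quadratic entropy is H2 = -log V."""
    x = np.asarray(x, dtype=float)
    d = x[:, None] - x[None, :]          # pairwise differences
    s2 = 2.0 * sigma**2                  # kernel variance doubles under convolution
    return np.mean(np.exp(-d**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2))
```

Tightly concentrated samples yield a large potential (low entropy); spreading the samples out lowers it.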


IEEE Transactions on Neural Networks | 2013

Two-Stage Nonnegative Sparse Representation for Large-Scale Face Recognition

Ran He; Wei-Shi Zheng; Bao-Gang Hu; Xiangwei Kong

This paper proposes a novel nonnegative sparse representation approach, called two-stage sparse representation (TSR), for robust face recognition on large-scale databases. Based on a divide-and-conquer strategy, TSR decomposes the procedure of robust face recognition into an outlier detection stage and a recognition stage. In the first stage, we propose a general multisubspace framework to learn a robust metric in which noise and outliers in image pixels are detected. Potential loss functions, including L1, L2,1, and correntropy, are studied. In the second stage, based on the learned metric and collaborative representation, we propose an efficient nonnegative sparse representation algorithm to find an approximate solution of the sparse representation. According to the L1 ball theory in sparse representation, the approximate solution is unique and can be optimized efficiently. A filtering strategy is then developed to avoid computing the sparse representation on the whole large-scale dataset. Moreover, theoretical analysis gives the necessary condition for the nonnegative least squares technique to find a sparse solution. Extensive experiments on several public databases demonstrate that the proposed TSR approach, in general, achieves better classification accuracy than state-of-the-art sparse representation methods. More importantly, it achieves a significant reduction in computational cost compared with the sparse representation classifier, which makes TSR more suitable for robust face recognition on large-scale datasets.


Neural Computation | 2011

A regularized correntropy framework for robust pattern recognition

Ran He; Wei-Shi Zheng; Bao-Gang Hu; Xiangwei Kong

This letter proposes a new multiple linear regression model using regularized correntropy for robust pattern recognition. First, we motivate the use of correntropy to improve the robustness of the classical mean square error (MSE) criterion, which is sensitive to outliers. An l1 regularization scheme is then imposed on the correntropy to learn robust and sparse representations, and based on the half-quadratic optimization technique, we propose a novel algorithm to solve the resulting nonlinear optimization problem. Second, we develop a new correntropy-based classifier based on the learned regularization scheme for robust object recognition. Extensive experiments over several applications confirm that correntropy-based l1 regularization can improve recognition accuracy and receiver operating characteristic curves under noise corruption and occlusion.


Computer Vision and Pattern Recognition | 2011

Nonnegative sparse coding for discriminative semi-supervised learning

Ran He; Wei-Shi Zheng; Bao-Gang Hu; Xiangwei Kong

An informative and discriminative graph plays an important role in graph-based semi-supervised learning methods. This paper introduces a nonnegative sparse algorithm, and its approximated algorithm based on the l0-l1 equivalence theory, to compute the nonnegative sparse weights of a graph; the proposed method is hence termed the sparse probability graph (SPG). The nonnegative sparse weights in the graph naturally serve as clustering indicators, benefiting semi-supervised learning. More importantly, our approximation algorithm speeds up the computation of the nonnegative sparse coding, which is still a bottleneck in previous attempts at sparse nonnegative graph learning, and it is much more efficient than l1-norm sparsity techniques for learning large-scale sparse graphs. Finally, for discriminative semi-supervised learning, an adaptive label propagation algorithm is proposed to iteratively predict the labels of data on the SPG. Promising experimental results show that the nonnegative sparse coding is efficient and effective for discriminative semi-supervised learning.


Pacific Conference on Computer Graphics and Applications | 2007

Fast Hydraulic Erosion Simulation and Visualization on GPU

Xing Mei; Philippe Decaudin; Bao-Gang Hu

Natural mountains and valleys are gradually eroded by rainfall and river flows. Physically based modeling of this complex phenomenon is a major concern in producing realistic synthesized terrains. However, despite some recent improvements, existing algorithms are still computationally expensive, leading to a time-consuming process that is fairly impractical for terrain designers and 3D artists. In this paper, we present a new method to model the hydraulic erosion phenomenon that runs at interactive rates on today's computers. The method is based on the velocity field of the running water, which is created with an efficient shallow-water fluid model. The velocity field is used to compute the erosion and deposition process and the sediment transportation process. The method has been carefully designed to be implemented entirely on the GPU, and thus takes full advantage of the parallelism of current graphics hardware. Experimental results demonstrate that the proposed method is effective and efficient. It can create realistic erosion effects from rainfall and river flows, and produces fast simulation results for large terrains.
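A velocity-driven erosion-deposition step of the kind described above can be sketched per cell (the constants `Kc`, `Ks`, `Kd` are illustrative, and terrain-tilt and flow-update terms of a full scheme are omitted): sediment capacity scales with flow speed; soil dissolves where capacity exceeds suspended sediment and deposits where it falls short.

```python
import numpy as np

def erosion_step(height, velocity, sediment, Kc=0.1, Ks=0.5, Kd=0.5):
    """One illustrative erosion-deposition step on a grid of cells:
    C = Kc * |v| is the sediment transport capacity; erode (height down,
    sediment up) where C > sediment, deposit the excess otherwise."""
    C = Kc * np.abs(velocity)              # transport capacity per cell
    erode = C > sediment
    delta = np.where(erode,
                     Ks * (C - sediment),  # soil dissolved into the water
                     Kd * (sediment - C))  # sediment settled back down
    height = height - np.where(erode, delta, -delta)
    sediment = sediment + np.where(erode, delta, -delta)
    return height, sediment
```

Each cell's update depends only on its own state, which is why this kind of rule maps naturally onto a GPU.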


Mathematics and Computers in Simulation | 2008

Analytical study of a stochastic plant growth model: Application to the GreenLab model

M.Z. Kang; Paul-Henry Cournède; P. de Reffye; Daniel Auclair; Bao-Gang Hu

A stochastic functional-structural model simulating plant development and growth is presented. The number of organs (internodes, leaves, and fruits) produced by the model is not only a key intermediate variable for computing biomass production, but also an indicator of model complexity. Obtaining their mean and variance through simulation is time-consuming, and the results are approximate. In this paper, based on the idea of substructure decomposition, the theoretical mean and variance of the number of organs in a plant structure are computed recursively by applying a compound law of generating functions. This analytical method provides fast and precise results, which facilitates model analysis as well as model calibration and validation with real plants. Furthermore, the mean and variance of the biomass production from the stochastic plant model are of special interest, as they are linked to yield prediction. In this paper, approximations of these moments are computed analytically, through differential statistics, for any plant age. A case study on sample trees from this functional-structural model shows the theoretical moments of the number of organs and of the biomass production, as well as the efficiency of the analytical method compared to Monte-Carlo simulation. The advantages and drawbacks of this stochastic model for agricultural applications are discussed.
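The compound law of generating functions, G_S(s) = G_N(G_X(s)) for a random sum S = X_1 + ... + X_N, yields the moments directly. A minimal sketch of the resulting mean/variance formulas (the paper's recursive application to substructures is omitted):

```python
def compound_moments(mean_N, var_N, mean_X, var_X):
    """Mean and variance of S = X_1 + ... + X_N (N and the X_i independent),
    obtained by differentiating the compound pgf G_S(s) = G_N(G_X(s)):
      E[S]   = E[N] * E[X]
      Var[S] = E[N] * Var[X] + Var[N] * E[X]^2
    """
    mean_S = mean_N * mean_X
    var_S = mean_N * var_X + var_N * mean_X**2
    return mean_S, var_S
```

For example, if the number of active buds is Binomial(10, 0.5) and each bud bears exactly 3 organs, the organ count has mean 15 and variance 22.5, with no Monte-Carlo sampling required.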


IEEE Transactions on Knowledge and Data Engineering | 2012

Agglomerative Mean-Shift Clustering

Xiaotong Yuan; Bao-Gang Hu; Ran He

Mean-Shift (MS) is a powerful nonparametric clustering method. Although good accuracy can be achieved, its computational cost is particularly expensive even on moderately sized data sets. In this paper, for the purpose of algorithmic speedup, we develop an agglomerative MS clustering method along with its performance analysis. Our method, named Agglo-MS, is built upon an iterative query set compression mechanism, which is motivated by the quadratic bounding optimization nature of the MS algorithm. The whole framework can be efficiently implemented with linear running time complexity. We then extend Agglo-MS into an incremental version that performs comparably to its batch counterpart. The efficiency and accuracy of Agglo-MS are demonstrated by extensive comparative experiments on synthetic and real data sets.
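The fixed-point iteration that Agglo-MS accelerates is ordinary mean-shift with a Gaussian kernel; a minimal sketch (the bandwidth and stopping tolerance are illustrative):

```python
import numpy as np

def mean_shift_mode(X, x0, bandwidth=1.0, iters=50):
    """Plain mean-shift mode seeking: repeatedly move x to the
    kernel-weighted mean of the data until it stops moving."""
    x = x0.astype(float)
    for _ in range(iters):
        w = np.exp(-np.sum((X - x)**2, axis=1) / (2 * bandwidth**2))
        x_new = (w @ X) / w.sum()     # kernel-weighted mean of all samples
        if np.linalg.norm(x_new - x) < 1e-8:
            break
        x = x_new
    return x
```

The expense comes from running this loop from every query point against the whole data set; Agglo-MS's contribution is compressing the query set so far fewer such trajectories are computed.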

Collaboration


Dive into Bao-Gang Hu's collaborations.

Top Co-Authors

Mengzhen Kang
Chinese Academy of Sciences

Ran He
Chinese Academy of Sciences

Xiaotong Yuan
Nanjing University of Information Science and Technology

Weiming Dong
Chinese Academy of Sciences

Hong-Ping Yan
Chinese Academy of Sciences

Xing Mei
Chinese Academy of Sciences

Liang Wang
Chinese Academy of Sciences

Shuang-Hong Yang
Chinese Academy of Sciences