Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gaofeng Meng is active.

Publication


Featured research published by Gaofeng Meng.


International Conference on Computer Vision | 2013

Efficient Image Dehazing with Boundary Constraint and Contextual Regularization

Gaofeng Meng; Ying Wang; Jiangyong Duan; Shiming Xiang; Chunhong Pan

Images captured in foggy weather conditions often suffer from poor visibility. In this paper, we propose an efficient regularization method to remove haze from a single input image. Our method benefits greatly from an exploration of the inherent boundary constraint on the transmission function. This constraint, combined with a weighted L1-norm based contextual regularization, is modeled into an optimization problem to estimate the unknown scene transmission. An efficient algorithm based on variable splitting is also presented to solve the problem. The proposed method requires only a few general assumptions and can restore a high-quality haze-free image with faithful colors and fine image details. Experimental results on a variety of hazy images demonstrate the effectiveness and efficiency of the proposed method.
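
As an illustration of the boundary-constraint idea, the following is a minimal NumPy sketch, not the authors' implementation: the atmospheric light A and the radiance bounds C0 and C1 are assumed given, and a simple morphological patch filter stands in for the paper's weighted L1-norm contextual regularization.

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def dehaze_boundary_sketch(img, A, C0=20/255.0, C1=300/255.0, patch=15, t_min=0.1):
        """Sketch of the boundary constraint on the scene transmission.
        img : HxWx3 hazy image with values in [0, 1]
        A   : length-3 atmospheric light estimate (assumed known here)
        C0, C1 : assumed per-channel bounds on the true scene radiance
        """
        A = np.asarray(A, float).reshape(1, 1, 3)
        # Lower bound on the transmission implied by C0 <= J(x) <= C1.
        ratio = np.maximum((A - img) / (A - C0), (A - img) / (A - C1))
        t_b = np.clip(ratio.min(axis=2), 0.0, 1.0)
        # Patch-wise closing as a crude smoothing step (the paper instead
        # refines t with weighted L1-norm contextual regularization).
        t = minimum_filter(maximum_filter(t_b, size=patch), size=patch)
        t = np.maximum(t, t_min)[..., None]
        # Invert the haze imaging model I = J * t + A * (1 - t).
        J = (img - A) / t + A
        return np.clip(J, 0.0, 1.0), t[..., 0]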


IEEE Transactions on Neural Networks | 2012

Discriminative Least Squares Regression for Multiclass Classification and Feature Selection

Shiming Xiang; Feiping Nie; Gaofeng Meng; Chunhong Pan; Changshui Zhang

This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes to move along opposite directions, so that the distances between classes are enlarged. The ε-draggings are then integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form in which there is no need to train independent two-class machines. With its compact form, the model can be naturally extended for feature selection; this is achieved via the L2,1 norm of the transformation matrix, yielding a sparse learning model. The model for multiclass classification and its extension for feature selection are solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.
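
The ε-dragging step can be illustrated with a short NumPy sketch. This is not the authors' released code: a ±1 target coding is used, and the ridge weight lam and the iteration count are arbitrary illustrative values.

    import numpy as np

    def discriminative_lsr_sketch(X, y, n_classes, lam=1.0, n_iter=20):
        """Least squares regression with epsilon-dragging (illustrative).
        X : (n_samples, n_features) data matrix; y : labels in {0..n_classes-1}.
        """
        n = X.shape[0]
        Xb = np.hstack([X, np.ones((n, 1))])        # absorb the bias term
        Y = -np.ones((n, n_classes))
        Y[np.arange(n), y] = 1.0                    # +/-1 target coding
        B = np.where(Y > 0, 1.0, -1.0)              # dragging directions
        M = np.zeros_like(Y)                        # nonnegative dragging amounts
        I = np.eye(Xb.shape[1])
        for _ in range(n_iter):
            T = Y + B * M                           # relaxed (dragged) targets
            # Ridge-regression update of the transformation W.
            W = np.linalg.solve(Xb.T @ Xb + lam * I, Xb.T @ T)
            # Closed-form update of M: drag only where it enlarges the margin.
            M = np.maximum(B * (Xb @ W - Y), 0.0)
        return W                                    # predict: argmax of [X, 1] @ W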


IEEE Transactions on Circuits and Systems for Video Technology | 2013

Edge-Directed Single-Image Super-Resolution Via Adaptive Gradient Magnitude Self-Interpolation

Lingfeng Wang; Shiming Xiang; Gaofeng Meng; Huai-Yu Wu; Chunhong Pan

Super-resolution from a single image plays an important role in many computer vision systems. However, it remains a challenging task, especially in preserving local edge structures. To construct high-resolution images while preserving sharp edges, an effective edge-directed super-resolution method is presented in this paper. An adaptive self-interpolation algorithm is first proposed to estimate a sharp high-resolution gradient field directly from the input low-resolution image. The obtained high-resolution gradient is then used as a gradient constraint, or edge-preserving constraint, to reconstruct the high-resolution image. Extensive results show, both qualitatively and quantitatively, that the proposed method produces convincing super-resolution images containing complex and sharp features, compared with other state-of-the-art super-resolution algorithms.
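
The reconstruction stage, fitting a high-resolution image to the low-resolution input under a target gradient field, can be sketched with a small gradient-descent loop. The target gradient (gx_t, gy_t) is assumed to come from some self-interpolation or upsampling step, and s x s block averaging with periodic boundaries stands in for the true imaging model; these are simplifying assumptions, not the paper's exact formulation.

    import numpy as np

    def sr_gradient_constraint_sketch(y, gx_t, gy_t, s=2, beta=0.5,
                                      step=0.2, n_iter=200):
        """Minimize 0.5*||down(x) - y||^2 + 0.5*beta*||grad(x) - g||^2 by
        gradient descent.  y is the low-resolution image; gx_t, gy_t form the
        target high-resolution gradient field (assumed given)."""
        H, W = y.shape[0] * s, y.shape[1] * s
        x = np.kron(y, np.ones((s, s)))              # initial guess: replication

        def down(img):                               # s x s block averaging
            return img.reshape(H // s, s, W // s, s).mean(axis=(1, 3))
        def up(img):                                 # adjoint of block averaging
            return np.kron(img, np.ones((s, s))) / (s * s)
        # Forward differences with periodic boundaries and their adjoints.
        def dx(img):  return np.roll(img, -1, axis=1) - img
        def dy(img):  return np.roll(img, -1, axis=0) - img
        def dxT(img): return np.roll(img, 1, axis=1) - img
        def dyT(img): return np.roll(img, 1, axis=0) - img

        for _ in range(n_iter):
            grad = up(down(x) - y) \
                 + beta * (dxT(dx(x) - gx_t) + dyT(dy(x) - gy_t))
            x -= step * grad                         # data term + gradient constraint
        return x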


IEEE Transactions on Image Processing | 2014

Spectral Unmixing via Data-Guided Sparsity

Feiyun Zhu; Ying Wang; Bin Fan; Shiming Xiang; Gaofeng Meng; Chunhong Pan

Hyperspectral unmixing, the process of estimating a common set of spectral bases and their corresponding composite percentages at each pixel, is an important task for hyperspectral analysis, visualization, and understanding. From an unsupervised learning perspective, this problem is very challenging: both the spectral bases and their composite percentages are unknown, making the solution space too large. To reduce the solution space, many approaches have been proposed that exploit various priors. In practice, these priors can easily lead to unsuitable solutions, because they apply an identical strength of constraint to all factors, which does not hold in practice. To overcome this limitation, we propose a novel sparsity-based method that learns a data-guided map (DgMap) to describe the individual mixed level of each pixel. Through this DgMap, the ℓp (0 < p < 1) constraint is applied in an adaptive manner. Such an implementation not only matches the practical situation, but also guides the spectral bases toward the pixels under a highly sparse constraint. Moreover, an elegant optimization scheme, together with its convergence proof, is provided in this paper. Extensive experiments on several datasets demonstrate that the DgMap is feasible and that high-quality unmixing results can be obtained by our method.
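
The adaptive-strength idea can be illustrated with a deliberately simplified sketch: plain NMF multiplicative updates in which a per-pixel weight stands in for the learned DgMap and a weighted L1 penalty stands in for the ℓp (0 < p < 1) constraint. Both substitutions, as well as the purity proxy below, are illustrative and do not reproduce the paper's model.

    import numpy as np

    def unmix_weighted_sparsity_sketch(V, n_end, lam_max=0.1, n_iter=300, eps=1e-9):
        """V : (n_bands, n_pixels) nonnegative hyperspectral data matrix.
        Returns spectral bases W (n_bands, n_end) and abundances H."""
        rng = np.random.default_rng(0)
        n_bands, n_pix = V.shape
        W = rng.random((n_bands, n_end)) + eps       # spectral bases
        H = rng.random((n_end, n_pix)) + eps         # composite percentages
        # Crude stand-in for the data-guided map: pixels far from the mean
        # spectrum are treated as purer and get a stronger sparsity weight.
        d = np.linalg.norm(V - V.mean(axis=1, keepdims=True), axis=0)
        lam = lam_max * d / (d.max() + eps)          # one weight per pixel
        for _ in range(n_iter):
            # Multiplicative updates for ||V - WH||_F^2 + sum_j lam_j*||H[:, j]||_1.
            H *= (W.T @ V) / (W.T @ W @ H + lam[None, :] + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H                                  # add a sum-to-one constraint if needed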


Pattern Recognition | 2013

Level set evolution with locally linear classification for image segmentation

Ying Wang; Shiming Xiang; Chunhong Pan; Lingfeng Wang; Gaofeng Meng

This paper presents a novel local region-based level set model for image segmentation. In each local region, we define a locally weighted least squares energy to fit a linear classification function. The local energies are then integrated over the entire image domain to form an energy functional in terms of the level set function. The energy is minimized by level set evolution and estimation of the parameters of the locally linear functions in an iterative process. By introducing locally linear functions to separate background and foreground in local regions, our model not only ensures the accuracy of the segmentation results, but is also very robust to initialization. Experiments are reported to demonstrate the effectiveness and efficiency of our model.
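
The building block of the model, a locally weighted least-squares fit of a linear classification function within one window, can be sketched as follows. Using the pixel intensity as a one-dimensional feature and a Gaussian kernel as the local weight are illustrative choices; the full model integrates such local energies over the image and evolves the level set function, which is not shown here.

    import numpy as np

    def local_linear_fit_sketch(patch, phi_patch, sigma=3.0):
        """Fit f(v) = a*v + b in one window by weighted least squares.
        patch     : (h, w) image intensities in the window
        phi_patch : (h, w) current level-set values; sign(phi) gives fg/bg labels
        """
        h, w = patch.shape
        yy, xx = np.mgrid[:h, :w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        # Gaussian weights centred on the window (the local weighting kernel).
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2)).ravel()
        labels = np.where(phi_patch > 0, 1.0, -1.0).ravel()        # +/-1 targets
        F = np.stack([patch.ravel(), np.ones(patch.size)], axis=1)  # features [v, 1]
        A = F.T @ (F * g[:, None])                   # weighted normal equations
        b = F.T @ (g * labels)
        coef = np.linalg.solve(A + 1e-6 * np.eye(2), b)
        response = (F @ coef).reshape(h, w)          # local linear separation of fg/bg
        return coef, response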


IEEE Signal Processing Letters | 2008

Affine Registration of Point Sets Using ICP and ICA

Shaoyi Du; Nanning Zheng; Gaofeng Meng; Zejian Yuan

This letter proposes a novel algorithm for affine registration of point sets by incorporating an affine transformation into the iterative closest point (ICP) algorithm. At each iterative step of the algorithm, a closed-form solution of the affine transformation is derived. Similar to the ICP algorithm, the new algorithm converges monotonically to a local minimum from any given initial parameters. To obtain the best affine registration result, good initial parameters are required; these are estimated using independent component analysis (ICA). Experimental results demonstrate the robustness and high accuracy of this algorithm.
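
A minimal sketch of the iteration, closest-point matching followed by a closed-form least-squares affine update, is given below; the ICA-based initialization described in the letter is omitted and an identity start is used instead.

    import numpy as np
    from scipy.spatial import cKDTree

    def affine_icp_sketch(P, Q, n_iter=50, tol=1e-8):
        """Affine ICP sketch.  P : (n, d) moving set, Q : (m, d) fixed set.
        Returns (A, t) such that P @ A.T + t approximately aligns P with Q."""
        d = P.shape[1]
        A, t = np.eye(d), np.zeros(d)
        tree = cKDTree(Q)
        prev_err = np.inf
        for _ in range(n_iter):
            X = P @ A.T + t                        # currently transformed points
            dist, idx = tree.query(X)              # closest-point correspondences
            Y = Q[idx]
            # Closed-form least-squares affine fit of Y ~ P @ A.T + t.
            Ph = np.hstack([P, np.ones((len(P), 1))])
            M, *_ = np.linalg.lstsq(Ph, Y, rcond=None)
            A, t = M[:d].T, M[d]
            err = float(np.mean(dist ** 2))
            if prev_err - err < tol:               # stop when the error plateaus
                break
            prev_err = err
        return A, t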


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Metric Rectification of Curved Document Images

Gaofeng Meng; Chunhong Pan; Shiming Xiang; Jiangyong Duan; Nanning Zheng

In this paper, we propose a metric rectification method to restore a document image from a single camera-captured image. The core idea is to construct an isometric image mesh by exploiting the geometry of the page surface and the camera. Our method uses a general cylindrical surface (GCS) to model the curved page shape. Under a few proper assumptions, the printed horizontal text lines are shown to be line-convergent symmetric. This property is then used to constrain the estimation of various model parameters under perspective projection. We also introduce a paraperspective projection to approximate the nonlinear perspective projection. A set of closed-form formulas is thus derived for estimating the GCS directrix and the document aspect ratio. Our method provides a straightforward framework for image metric rectification. It is insensitive to camera positions, viewing angles, and the shapes of document pages. To evaluate the proposed method, we conducted comprehensive experiments on both synthetic and real-captured images. The results demonstrate the efficiency of our method. We also carried out a comparative experiment on the public CBDAR2007 dataset. The experimental results show that our method outperforms the state-of-the-art methods in terms of OCR accuracy and rectification error.


International Conference on Document Analysis and Recognition | 2007

Document Images Retrieval Based on Multiple Features Combination

Gaofeng Meng; Nanning Zheng; Yonghong Song; Yuanlin Zhang

Retrieving relevant document images from a large number of digitized pages, with various artificial variations and quality degradations caused by scanning and printing, is a meaningful and challenging problem. We attempt to address this problem by combining multiple kinds of document features in a hybrid way. First, two new document image features, based on the projection histograms and the crossing-number histograms of an image, are proposed. Second, these two features, together with a density distribution feature and a local binary pattern feature, are combined in a multistage structure to build a novel document image retrieval system. Experimental results show that the proposed system is efficient and robust for retrieving different kinds of document images, even when some of them are severely degraded.
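
The two proposed features are straightforward to sketch for binarized pages resized to a common resolution. The density-distribution and local-binary-pattern features and the multistage combination are not reproduced here, and the equal weighting and L1 distance below are illustrative choices.

    import numpy as np

    def projection_histograms(binary):
        """Row/column projection histograms of a 0/1 page image (1 = ink)."""
        b = binary.astype(np.int32)
        h = b.sum(axis=1).astype(float)              # horizontal projection
        v = b.sum(axis=0).astype(float)              # vertical projection
        return h / (h.sum() + 1e-9), v / (v.sum() + 1e-9)

    def crossing_histograms(binary):
        """Counts of 0->1 transitions per row and per column."""
        b = binary.astype(np.int8)
        rows = (np.diff(b, axis=1) == 1).sum(axis=1).astype(float)
        cols = (np.diff(b, axis=0) == 1).sum(axis=0).astype(float)
        return rows / (rows.sum() + 1e-9), cols / (cols.sum() + 1e-9)

    def feature_distance(page_a, page_b):
        """Toy combination of the two feature types with equal weights."""
        fa = np.concatenate(projection_histograms(page_a) + crossing_histograms(page_a))
        fb = np.concatenate(projection_histograms(page_b) + crossing_histograms(page_b))
        return float(np.abs(fa - fb).sum())          # L1 distance between descriptors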


International Journal of Computer Vision | 2015

Image Deblurring with Coupled Dictionary Learning

Shiming Xiang; Gaofeng Meng; Ying Wang; Chunhong Pan; Changshui Zhang

Image deblurring is a challenging problem in vision computing. Traditionally, this task is addressed as an inverse problem posed on the blurred image itself. This paper presents a learning-based framework in which the knowledge hidden in large amounts of available data is explored and exploited for image deblurring. To this end, our algorithm is developed under the conceptual framework of coupled dictionary learning. Specifically, given pairs of blurred image patches and their corresponding clear ones, a learning model is constructed to learn a pair of dictionaries: one dictionary is responsible for representing the clear images, while the other is responsible for representing the blurred images. Theoretically, the learning model is analyzed with coupled sparse representations of the training samples. As the atoms of these dictionaries are coupled one-by-one, reconstruction information can be transferred between the clear and blurred images. In the application phase, the blurred dictionary is employed to linearly reconstruct the blurred image to be restored; the reconstruction coefficients are then kept unchanged and applied to the clear dictionary to restore the final result. The main advantage of our approach lies in that it works in the case of unknown blur kernels. Comparative experiments indicate the validity of our approach.
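
A generic coupled-dictionary sketch in the same spirit is shown below, using scikit-learn's dictionary learning and OMP sparse coding. It stands in for, and does not reproduce, the paper's learning model; the atom count, sparsity level, and patch preparation are assumptions.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

    def train_coupled_dictionaries(clear_patches, blurred_patches, n_atoms=256, alpha=1.0):
        """Stack clear/blurred patch pairs so both dictionaries share one
        sparse code, which couples their atoms one-by-one.
        clear_patches, blurred_patches : (n_patches, patch_dim), row-aligned pairs."""
        joint = np.hstack([clear_patches, blurred_patches])
        learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                              batch_size=64, random_state=0)
        learner.fit(joint)
        d = clear_patches.shape[1]
        D = learner.components_                      # (n_atoms, 2 * patch_dim)
        return D[:, :d], D[:, d:]                    # (D_clear, D_blur), atom-aligned

    def deblur_patches(blurred_patches, D_clear, D_blur, n_nonzero=8):
        """Code each blurred patch over D_blur, keep the coefficients, and
        reconstruct with D_clear to predict the clear patches."""
        norms = np.linalg.norm(D_blur, axis=1, keepdims=True) + 1e-12
        coder = SparseCoder(dictionary=D_blur / norms, transform_algorithm='omp',
                            transform_n_nonzero_coefs=n_nonzero)
        codes = coder.transform(blurred_patches) / norms.T   # undo the normalization
        return codes @ D_clear                       # reconstruct with clear atoms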


IEEE Transactions on Image Processing | 2010

Skew Estimation of Document Images Using Bagging

Gaofeng Meng; Chunhong Pan; Nanning Zheng; Chen Sun

This paper proposes a general-purpose method for estimating the skew angles of document images. Rather than deriving a skew angle merely from text lines, the proposed method exploits various types of visual cues of image skew available in local image regions. The visual cues are extracted via the Radon transform, and outliers are then iteratively rejected through a floating cascade. A bagging (bootstrap aggregating) estimator is finally employed to combine the estimates from the local image blocks. Our experimental results show significant improvements over the state-of-the-art methods in terms of execution speed and estimation accuracy, as well as robustness to short and sparse text lines, multiple different skews, and the presence of nontextual objects of various types and quantities.
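
A minimal sketch of the pipeline, Radon-transform skew cues from random local blocks aggregated by bootstrap aggregation, is given below. The floating-cascade outlier rejection is omitted, and the block size, angle range, and variance criterion are illustrative choices.

    import numpy as np
    from skimage.transform import radon

    def estimate_skew_sketch(page, block=256, n_blocks=20, n_boot=50, seed=0):
        """page : 2-D grayscale document image, text darker than background.
        Returns a skew estimate in degrees (sign follows skimage's convention)."""
        rng = np.random.default_rng(seed)
        angles = np.arange(-15.0, 15.25, 0.25)       # candidate skew angles
        ink = 1.0 - page / page.max()                # make text bright
        H, W = ink.shape
        cues = []
        for _ in range(n_blocks):
            r = rng.integers(0, max(H - block, 1))
            c = rng.integers(0, max(W - block, 1))
            blk = ink[r:r + block, c:c + block]
            if blk.sum() < 1e-3:                     # skip empty regions
                continue
            # Projection profile at each angle; text lines align with the
            # projection rays near theta = skew + 90 degrees and maximize
            # the variance of the profile.
            sino = radon(blk, theta=angles + 90.0, circle=False)
            cues.append(angles[np.argmax(sino.var(axis=0))])
        cues = np.asarray(cues)
        # Bagging: average the medians of bootstrap resamples of the cues.
        boots = [np.median(rng.choice(cues, size=len(cues))) for _ in range(n_boot)]
        return float(np.mean(boots))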

Collaboration


Dive into Gaofeng Meng's collaborations.

Top Co-Authors

Chunhong Pan, Chinese Academy of Sciences
Shiming Xiang, Chinese Academy of Sciences
Lingfeng Wang, Chinese Academy of Sciences
Nanning Zheng, Xi'an Jiaotong University
Ying Wang, Chinese Academy of Sciences
Dongcai Cheng, Chinese Academy of Sciences
Jiangyong Duan, Chinese Academy of Sciences
Yonghong Song, Xi'an Jiaotong University
Jianlong Chang, Chinese Academy of Sciences
Jie Gu, Chinese Academy of Sciences