Yuebin Wang
Beijing Normal University
Publications
Featured research published by Yuebin Wang.
IEEE Transactions on Geoscience and Remote Sensing | 2016
Zhenxin Zhang; Liqiang Zhang; Xiaohua Tong; P. Takis Mathiopoulos; Bo Guo; Xianfeng Huang; Zhen Wang; Yuebin Wang
Point cloud classification plays a critical role in point cloud processing and analysis. Accurately classifying objects on the ground in urban environments from airborne laser scanning (ALS) point clouds is challenging because of their large variety, complex geometries, and visual appearances. In this paper, a novel framework is presented for effectively extracting the shape features of objects from an ALS point cloud, which is then used to classify both large and small objects in the point cloud. In the framework, the point cloud is split into hierarchical clusters of different sizes based on a natural exponential function threshold. Then, to take advantage of hierarchical point cluster correlations, latent Dirichlet allocation and sparse coding are jointly performed to extract and encode the shape features of the multilevel point clusters. The features at different levels capture information on the shapes of objects of different sizes. In this way, robust and discriminative shape features of the objects can be identified, and thus the precision of the classification is significantly improved, particularly for small objects.
IEEE Transactions on Geoscience and Remote Sensing | 2016
Yuebin Wang; Liqiang Zhang; Xiaohua Tong; Liang Zhang; Zhenxin Zhang; Hao Liu; Xiaoyue Xing; P. Takis Mathiopoulos
With the emergence of huge volumes of high-resolution remote sensing images produced by all sorts of satellites and airborne sensors, processing and analysis of these images require effective retrieval techniques. To alleviate the dramatic variation of retrieval accuracy among queries caused by single-feature algorithms, we developed a novel graph-based learning method for effectively retrieving remote sensing images. The method utilizes a three-layer framework that integrates the strengths of query expansion and the fusion of holistic and local features. In the first layer, two retrieval image sets are obtained using retrieval methods based on holistic and local features, respectively, and the top-ranked and common images from both top candidate lists subsequently form graph anchors. In the second layer, the graph anchors act as an expanded query to retrieve six image sets from the image database using each individual feature. In the third layer, the images in the six image sets are evaluated to generate positive and negative data, and SimpleMKL is applied to learn suitable query-dependent fusion weights for achieving the final image retrieval result. Extensive experiments were performed on the UC Merced Land Use-Land Cover data set. The source code is available on our website. Compared with other related methods, the retrieval precision is significantly enhanced without sacrificing the scalability of our approach.
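The first-layer anchor selection described in this abstract can be illustrated with a minimal sketch (not the authors' released code; the image names and top-k size below are invented):

```python
# Illustrative sketch of forming "graph anchors" from two ranked retrieval
# lists: keep images that appear in the top-k of BOTH the holistic-feature
# and local-feature rankings, preserving the holistic order.

def graph_anchors(holistic_ranked, local_ranked, k=5):
    """Return top-ranked images common to both top-k candidate lists."""
    top_local = set(local_ranked[:k])
    return [img for img in holistic_ranked[:k] if img in top_local]

# Two hypothetical ranked candidate lists (best match first).
holistic = ["img3", "img7", "img1", "img9", "img4", "img2"]
local = ["img7", "img3", "img5", "img4", "img8", "img1"]

anchors = graph_anchors(holistic, local, k=5)
print(anchors)  # ['img3', 'img7', 'img4']
```

These anchors would then serve as the expanded query in the second layer.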
IEEE Transactions on Geoscience and Remote Sensing | 2014
Zhen Wang; Liqiang Zhang; Tian Fang; P. Takis Mathiopoulos; Huamin Qu; Dong Chen; Yuebin Wang
A 3-D tree structure plays an important role in many scientific fields, including forestry and agriculture. For example, terrestrial laser scanning (TLS) can efficiently capture the high-precision 3-D spatial arrangement and structure of trees as a point cloud. In the past, several methods to reconstruct 3-D trees from TLS point clouds have been proposed; however, they generally fail to process incomplete TLS data. To address such incomplete TLS data sets, a new method based on a structure-aware global optimization approach (SAGO) is proposed. The SAGO first obtains the approximate tree skeleton from a distance minimum spanning tree (DMst) and then defines the stretching directions of the branches on the tree skeleton. Based on these stretching directions, the SAGO recovers missing data in the incomplete TLS point cloud. The DMst is applied again to obtain the refined tree skeleton from the optimized data, and the tree skeleton is smoothed by employing a Laplacian function. To reconstruct 3-D tree models, the radius of each branch section is estimated, and leaves are added to form the crown geometry. The developed methodology has been extensively evaluated on a dozen TLS point clouds of various types of trees. Both qualitative and quantitative performance evaluation results indicate that the SAGO can effectively reconstruct 3-D tree models from grossly incomplete TLS point clouds with significant amounts of missing data.
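The starting point of SAGO, a minimum spanning tree over pairwise point distances, can be sketched in a few lines (a toy Prim's algorithm on synthetic points, standing in for the paper's DMst on real TLS data):

```python
# Minimal sketch: extract an approximate tree skeleton as the Euclidean
# minimum spanning tree of a 3-D point set, using Prim's algorithm.
import math

def mst_edges(points):
    """Prim's algorithm: return the n-1 edges (i, j) of the Euclidean MST."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Cheapest edge crossing from the tree to an unvisited point.
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((i, j))
        in_tree.add(j)
    return edges

# A rough Y shape: a three-point trunk splitting into two branch tips.
points = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 2.0),
          (0.5, 0.0, 3.0), (-0.5, 0.0, 3.0)]
skeleton = mst_edges(points)
print(skeleton)  # [(0, 1), (1, 2), (2, 3), (3, 4)]
```

On real data, SAGO would refine this skeleton after recovering the missing points.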
IEEE Transactions on Geoscience and Remote Sensing | 2016
Zhuqiang Li; Liqiang Zhang; Xiaohua Tong; Bo Du; Yuebin Wang; Liang Zhang; Zhenxin Zhang; Hao Liu; Jie Mei; Xiaoyue Xing; P. Takis Mathiopoulos
The ability to classify urban objects in large urban scenes from point clouds efficiently and accurately remains a challenging task. A new methodology for the effective and accurate classification of terrestrial laser scanning (TLS) point clouds is presented in this paper. First, in order to efficiently capture the complementary characteristics of each 3-D point, a set of point-based descriptors for recognizing urban point clouds is constructed. This includes the 3-D geometry captured by the spin-image descriptor computed at three different scales, the mean RGB color of the point in the camera images, the LAB values of that mean RGB, and the normal at each 3-D point. An initial 3-D labeling of the categories in urban environments is generated by applying a linear support vector machine classifier to the descriptors. These initial classification results are first globally optimized by a multilabel graph-cut approach. They are then refined automatically by a local optimization approach based upon an object-oriented decision tree that uses weak priors among urban categories, which significantly improves the final classification accuracy. The proposed method has been validated on three urban TLS point clouds, and the experimental results demonstrate that it outperforms state-of-the-art methods in classification accuracy for buildings, trees, pedestrians, and cars.
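The per-point descriptor described above can be sketched as a simple concatenation (not the authors' code; the 64-bin spin-image size and all feature values are assumed toy inputs):

```python
# Minimal sketch: assemble one point's feature vector from spin-image
# histograms at three scales, the mean RGB color, its LAB conversion, and
# the surface normal, ready to feed to a linear SVM.
import numpy as np

def point_descriptor(spin_images, rgb, lab, normal):
    """Concatenate per-point cues into a single feature vector."""
    return np.concatenate([*spin_images, rgb, lab, normal])

# Toy inputs for a single 3-D point.
spins = [np.zeros(64), np.zeros(64), np.zeros(64)]  # three scales, 64 bins each (assumed size)
rgb = np.array([0.4, 0.5, 0.3])       # mean RGB over the camera images
lab = np.array([50.2, -8.1, 9.7])     # LAB values of that mean RGB
normal = np.array([0.0, 0.0, 1.0])    # estimated surface normal

feat = point_descriptor(spins, rgb, lab, normal)
print(feat.shape)  # (201,) = 3 * 64 + 3 + 3 + 3
```

The SVM labels produced from such vectors are what the graph-cut and decision-tree stages then smooth and refine.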
International Journal of Applied Earth Observation and Geoinformation | 2015
Yuebin Wang; Liqiang Zhang; P. Takis Mathiopoulos; Hao Deng
To visualize large urban models efficiently, this paper presents a framework for generalizing urban building footprints and facade textures by using multiple Gestalt rules and a graph-cut-based energy function. First, an urban scene is divided into different blocks by main road networks. In each block, the building footprints are partitioned into potential Gestalt groups. Because a footprint may satisfy several Gestalt principles, we employ a graph-cut-based optimization function to obtain a consistent segmentation of the buildings into optimal Gestalt groups with minimal energy. The building footprints in each Gestalt group are aggregated into different levels of detail (LODs). Building facade textures are also abstracted and simplified into multiple LODs using the same approach as for the building footprints. An effective data structure termed SceneTree is introduced to manage these aggregated building footprints and facade textures. Combined with a parallelization scheme, the rendering efficiency of large-scale urban buildings is improved. Compared with other methods, the presented method can efficiently visualize large urban models while maintaining the city's image.
IEEE Transactions on Geoscience and Remote Sensing | 2017
Yuebin Wang; Liqiang Zhang; Hao Deng; Jiwen Lu; Haiyang Huang; Liang Zhang; Jun Liu; Hong Tang; Xiaoyue Xing
To achieve high scene classification performance on high spatial resolution remote sensing images (HSR-RSIs), it is important to learn a discriminative space in which the distance metric can precisely measure both the similarity and dissimilarity of features and labels between images. While traditional metric learning methods focus on preserving interclass separability, label consistency (LC) is less often involved, and this can degrade scene image classification accuracy. To account for intraclass compactness in HSR-RSIs, we propose a discriminative distance metric learning method with LC (DDML-LC). The DDML-LC starts from dense scale-invariant feature transform (SIFT) features extracted from HSR-RSIs and then encodes them using spatial pyramid maximum pooling with sparse coding. In the learning process, intraclass compactness and interclass separability are enforced while global and local LC after the feature transformation is constrained, leading to a joint optimization of the feature manifold, distance metric, and label distribution. The learned metric space can scale to discriminate out-of-sample HSR-RSIs that do not appear in the metric learning process. Experimental results on three data sets demonstrate the superior performance of the DDML-LC over state-of-the-art techniques in HSR-RSI classification.
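The role a learned distance metric plays here can be illustrated with a much simpler stand-in: a Mahalanobis distance whose matrix is the inverse pooled within-class covariance (a classic baseline, not the DDML-LC optimization; all features below are synthetic):

```python
# Sketch: a Mahalanobis metric d(x, y) = sqrt((x - y)^T M (x - y)) that
# shrinks within-class directions, so same-class samples sit closer together
# than samples from different classes.
import numpy as np

def mahalanobis(x, y, M):
    """Distance between feature vectors x and y under metric matrix M."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

# Two tiny synthetic classes of 2-D features.
class_a = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.1]])
class_b = np.array([[3.0, 3.0], [3.1, 2.9], [2.9, 3.2]])

# Pool the within-class scatter and invert it to get the metric matrix M.
centered = np.vstack([class_a - class_a.mean(0), class_b - class_b.mean(0)])
M = np.linalg.inv(np.cov(centered.T) + 1e-6 * np.eye(2))

# Under M, a same-class pair is far closer than the two class means are.
intra = mahalanobis(class_a[0], class_a[1], M)
inter = mahalanobis(class_a.mean(0), class_b.mean(0), M)
print(intra < inter)  # True
```

DDML-LC additionally constrains label consistency while learning the metric, which this baseline does not attempt.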
International Journal of Geographical Information Science | 2017
Yuebin Wang; Liqiang Zhang; Xiaohua Tong; Suhong Liu; Tian Fang
Urban model retrieval has wide applications in the geoscience field, and it is also a very challenging research topic due to blur and background clutter in query images and the large spatial inconsistencies between query and database images. In this study, a feature extraction and similarity metric-learning framework for urban model retrieval is proposed. In the method, a selective search voting algorithm is presented to automatically localize and segment a query object from an input image with the help of the top-ranked retrieved database images. Then, the local features of object images are extracted via sparse coding, and the global features are learned using a spatially constrained convolutional neural network. We utilize a new similarity metric to match the database images with a query object image. Finally, similar 3D models are retrieved. Both qualitative and quantitative experimental results indicate that the proposed framework can localize and segment a query object from an input image precisely and that the retrieval results are better than those of other related approaches.
International Conference on Computer Vision | 2017
Fangyu Liu; Shuaipeng Li; Liqiang Zhang; Chenghu Zhou; Rongtian Ye; Yuebin Wang; Jiwen Lu
ISPRS Journal of Photogrammetry and Remote Sensing | 2018
Panpan Zhu; Liqiang Zhang; Yuebin Wang; Jie Mei; Guoqing Zhou; Fangyu Liu; Weiwei Liu; P. Takis Mathiopoulos
IEEE Transactions on Geoscience and Remote Sensing | 2018
Yuebin Wang; Liqiang Zhang; Xiaohua Tong; Feiping Nie; Haiyang Huang; Jie Mei