Publication


Featured research published by Guojun Lu.


Pattern Recognition Letters | 2016

Enhancing SIFT-based image registration performance by building and selecting highly discriminating descriptors

Guohua Lv; Shyh Wei Teng; Guojun Lu

Highlights: an analysis of two types of gradient information in SIFT-like descriptors; a technique that systematically utilizes both types of gradient information; a strategy for selecting matches to enhance the discriminative power of descriptors. In this paper we investigate gradient utilization in building SIFT (Scale Invariant Feature Transform)-like descriptors for image registration. There are generally two types of gradient information, i.e. gradient magnitude and gradient occurrence, which can be used for building SIFT-like descriptors. We provide a theoretical analysis of the effectiveness of each of the two types of gradient information when used individually. Based on our analysis, we propose a novel technique that systematically uses both types of gradient information together for image registration. Moreover, we propose a strategy to select keypoint matches with higher discrimination. The proposed technique can be used for both mono-modal and multi-modal image registration. Our experimental results show that the proposed technique improves registration accuracy over existing SIFT-like descriptors.
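The distinction the abstract draws between gradient magnitude and gradient occurrence can be illustrated with a minimal sketch of a SIFT-like orientation histogram. This is an illustrative reconstruction under assumed conventions, not the authors' implementation; `orientation_histogram` and its parameters are hypothetical names.

```python
import numpy as np

def orientation_histogram(patch, bins=8, use_magnitude=True):
    """Build a SIFT-like orientation histogram for one image patch.

    use_magnitude=True  -> bins weighted by gradient magnitude
    use_magnitude=False -> bins weighted by gradient occurrence
                           (each pixel contributes a count of 1)
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)   # orientation in [0, 2*pi)
    idx = np.minimum((ang / (2 * np.pi) * bins).astype(int), bins - 1)
    weights = mag if use_magnitude else np.ones_like(mag)
    hist = np.bincount(idx.ravel(), weights=weights.ravel(), minlength=bins)
    return hist / (np.linalg.norm(hist) + 1e-12)  # normalised descriptor
```

With magnitude weighting, strong edges dominate the descriptor; with occurrence weighting, every pixel votes equally, which the paper analyses as a second, complementary source of information.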


GIScience & Remote Sensing | 2018

Segmentation of Airborne Point Cloud Data for Automatic Building Roof Extraction

Syed Ali Naqi Gilani; Mohammad Awrangjeb; Guojun Lu

Roof plane segmentation is a complex task, since point cloud data carry no connection information and do not provide any semantic characteristics of the underlying scanned surfaces. Point cloud density, complex roof profiles, and occlusion add another layer of complexity that is often encountered in practice. In this article, we present a new technique that provides a better interpolation of roof regions where multiple surfaces intersect, creating non-manifold points. As a result, these geometric features are preserved, enabling automated identification and segmentation of roof planes from unstructured laser data. The proposed technique has been tested using the International Society for Photogrammetry and Remote Sensing benchmark and three Australian datasets, which differ in terrain, point density, building sizes, and vegetation. The qualitative and quantitative results show the robustness of the methodology and indicate that the proposed technique can eliminate vegetation and extract buildings as well as their non-occluding parts from complex scenes at a high success rate for building detection (between 83.9% and 100% per-object completeness) and roof plane extraction (between 73.9% and 96% per-object completeness). The proposed method works more robustly than some existing methods in the presence of occlusion and low point sampling, as indicated by a correctness of above 95% for all datasets.


International Conference on Image Processing | 2016

Robust building roof segmentation using airborne point cloud data

Syed Ali Naqi Gilani; Mohammad Awrangjeb; Guojun Lu

Approximation of geometric features is an essential step in point cloud segmentation and surface reconstruction. Often, planar surfaces are estimated using principal component analysis (PCA), which is sensitive to noise and smooths sharp features; hence, segmentation results in unreliable reconstructed surfaces. This article presents a point cloud segmentation method for building detection and roof plane extraction. It uses PCA for saliency feature estimation, including surface curvature and point normals. However, the point normals around anisotropic surfaces are approximated using a consistent isotropic sub-neighbourhood obtained by Low-Rank Subspace Clustering with Prior Knowledge (LRSCPK). The developed segmentation technique is tested using two real-world samples and two benchmark datasets. Per-object and per-area completeness and correctness results indicate the robustness of the approach and the quality of the reconstructed surfaces and extracted buildings.
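The PCA-based saliency estimation mentioned above (surface curvature and point normal from a local neighbourhood) can be sketched as follows. This is a generic textbook formulation, not the paper's code; `pca_normal` is a hypothetical name.

```python
import numpy as np

def pca_normal(neigh):
    """Estimate the surface normal of a point neighbourhood via PCA.

    The normal is the eigenvector of the neighbourhood covariance with the
    smallest eigenvalue; surface curvature is approximated as
    lambda_min / (lambda_0 + lambda_1 + lambda_2).
    """
    centred = neigh - neigh.mean(axis=0)
    cov = centred.T @ centred / len(neigh)
    w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
    normal = v[:, 0]                  # direction of least variance
    curvature = w[0] / w.sum()
    return normal, curvature
```

Because every neighbour contributes equally to the covariance, points straddling a roof ridge pull the estimated normal away from either plane, which is exactly the anisotropic-surface weakness the paper addresses with a consistent isotropic sub-neighbourhood.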


Digital Image Computing: Techniques and Applications | 2015

Rotation Invariant Spatial Pyramid Matching for Image Classification

Priyabrata Karmakar; Shyh Wei Teng; Guojun Lu; Dengsheng Zhang

This paper proposes a new Spatial Pyramid representation for image classification. Unlike the conventional Spatial Pyramid, the proposed method is invariant to rotation of the images. It works by partitioning an image into concentric rectangles and organizing them into a pyramid. Each pyramidal region is then represented using a histogram of visual words. Our experimental results show that the proposed method significantly outperforms the conventional one.
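The concentric-rectangle partitioning can be sketched as follows: each pixel is assigned to a ring by its rectangular "radius" from the image centre, and one visual-word histogram is built per ring. This is an illustrative sketch under assumed conventions, not the authors' implementation; both function names are hypothetical.

```python
import numpy as np

def concentric_ring_labels(h, w, levels=3):
    """Label each pixel with the index of the concentric rectangle it falls in.

    Ring 0 is the innermost rectangle; ring `levels - 1` the outermost band.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    # normalised distance from the centre along each axis (0 at centre, ~1 at border)
    dy = np.abs(ys - (h - 1) / 2) / (h / 2)
    dx = np.abs(xs - (w - 1) / 2) / (w / 2)
    d = np.maximum(dy, dx)                      # rectangular "radius"
    return np.minimum((d * levels).astype(int), levels - 1)

def ring_histograms(word_map, levels=3, vocab=100):
    """One visual-word histogram per concentric region, concatenated."""
    rings = concentric_ring_labels(*word_map.shape, levels=levels)
    return np.concatenate([
        np.bincount(word_map[rings == r], minlength=vocab) for r in range(levels)
    ])
```

Rotating the image moves pixels around the centre but largely keeps them within the same concentric region, so the per-ring histograms change far less than they would for the conventional grid cells.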


International Conference on Multimedia and Expo | 2014

Automatic segmentation of LiDAR point cloud data at different height levels for 3D building extraction

S M Abdullah; Mohammad Awrangjeb; Guojun Lu

This paper presents a new LiDAR segmentation technique for automatic building detection and roof plane extraction. First, using a height threshold based on the digital elevation model, it divides the LiDAR point cloud into “ground” and “non-ground” points. Then, starting from the maximum LiDAR height and decreasing the height at each iteration, it looks for points to form planar roof segments. At each height level, it clusters the points based on distance and fits straight lines through them. The nearest coplanar point to the midpoint of each line is used as a seed point, and the plane is grown in a region-growing fashion. Finally, a rule-based procedure is applied to remove planar segments belonging to trees. The experimental results show that the proposed technique offers high building detection and roof plane extraction rates compared to other recently proposed techniques.
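The first two steps above (ground/non-ground split against the DEM, then top-down iteration over height levels) can be sketched as follows. This is a minimal illustration, not the paper's code; the function names and the 2.5 m threshold are assumed values.

```python
import numpy as np

def split_ground(points, dem_height, height_thresh=2.5):
    """Split LiDAR points into ground / non-ground by height above the DEM.

    points: (N, 3) array of x, y, z; dem_height: per-point DEM ground elevation.
    The 2.5 m threshold is an assumed value, not taken from the paper.
    """
    above = points[:, 2] - dem_height > height_thresh
    return points[~above], points[above]          # ground, non-ground

def height_levels(non_ground, step=0.5):
    """Yield non-ground points in half-open height slices, from the top down."""
    z = non_ground[:, 2]
    top = z.max()
    while top > z.min() - step:
        yield non_ground[(z <= top) & (z > top - step)]
        top -= step
```

Each yielded slice would then be clustered by distance and examined for straight lines, as the abstract describes.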


Pattern Recognition Letters | 2018

Enhancing image registration performance by incorporating distribution and spatial distance of local descriptors

Guohua Lv; Shyh Wei Teng; Guojun Lu

A data-dependent dissimilarity measure called mp-dissimilarity has recently been proposed. Unlike the lp-norm distance widely used to compare vectors, mp-dissimilarity takes into account the relative positions of the two vectors with respect to the rest of the data. This paper investigates the potential of mp-dissimilarity in matching local image descriptors. Moreover, three new matching strategies are proposed that consider both lp-norm distance and mp-dissimilarity. Our proposed matching strategies are extensively evaluated against lp-norm distance and mp-dissimilarity on several benchmark datasets. Experimental results show that mp-dissimilarity is a promising alternative to lp-norm distance in matching local descriptors. The proposed matching strategies outperform both lp-norm distance and mp-dissimilarity in matching accuracy, and one of them is comparable to lp-norm distance in terms of recall vs. 1-precision.
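The data dependence of mp-dissimilarity can be sketched as follows, following the published formulation by Aryal et al.: per dimension, the dissimilarity is the fraction of the reference data whose value falls between the two vectors, rather than their raw coordinate difference. This is an illustrative sketch, not the paper's code.

```python
import numpy as np

def mp_dissimilarity(x, y, data, p=2):
    """mp-dissimilarity between vectors x and y relative to a dataset.

    Unlike the lp-norm, each dimension contributes the fraction of `data`
    lying in [min(x_i, y_i), max(x_i, y_i)]: two vectors are more dissimilar
    when many other points fall between them.
    """
    lo, hi = np.minimum(x, y), np.maximum(x, y)
    # |R_i|: number of data points inside [lo_i, hi_i] in each dimension
    counts = ((data >= lo) & (data <= hi)).sum(axis=0)
    frac = counts / len(data)
    return (np.mean(frac ** p)) ** (1 / p)
```

Note that mp_dissimilarity(x, x, data) is generally nonzero (the interval still contains x's own neighbours), which is one way the measure departs from a metric distance.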


Multimedia Tools and Applications | 2018

COREG: a corner based registration technique for multimodal images

Guohua Lv; Shyh Wei Teng; Guojun Lu

This paper presents a COrner-based REGistration technique for multimodal images (referred to as COREG). The proposed technique focuses on addressing large content and scale differences in multimodal images. Unlike traditional multimodal image registration techniques that rely on intensities or gradients for feature representation, we propose to use contour-based corners. First, curvature similarity between corners is explored for the first time for the purpose of multimodal image registration. Second, a novel local descriptor called Distribution of Edge Pixels Along Contour (DEPAC) is proposed to represent the edges in the neighborhood of corners. Third, a simple yet effective way of estimating scale difference is proposed by making use of geometric relationships between corner triplets from the reference and target images. Using a set of benchmark multimodal images and multimodal microscopic images, we demonstrate that our proposed technique outperforms a state-of-the-art multimodal image registration technique.


Digital Image Computing: Techniques and Applications | 2014

Automatic Building Footprint Extraction and Regularisation from LIDAR Point Cloud Data

Mohammad Awrangjeb; Guojun Lu

This paper presents a segmentation of LIDAR point cloud data for automatic extraction of building footprints. Using the ground height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points. Points on walls are removed from the set of non-ground points, and the remaining non-ground points are divided into clusters based on height and local neighbourhood. Planar roof segments are extracted from each cluster of points following a region-growing technique: planes are initialised using coplanar points as seed points and then grown using plane compatibility tests. Once all the planar segments are extracted, a rule-based procedure is applied to remove tree planes, which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual building boundaries, which are regularised based on a new feature-based technique. Corners and line segments are extracted from each boundary and adjusted using the assumption that each short building side is parallel or perpendicular to one or more neighbouring long building sides. Experimental results on five Australian datasets show that the proposed method offers a higher correctness rate in building footprint extraction than a state-of-the-art method.
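The parallel/perpendicular assumption used in the boundary regularisation step can be sketched as snapping each segment's orientation to the nearest multiple of 90 degrees relative to the dominant (longest) side. This is a simplified illustration of the idea, not the paper's feature-based technique; `regularise_angles` is a hypothetical name.

```python
import numpy as np

def regularise_angles(seg_angles, seg_lengths):
    """Snap boundary-segment orientations to be parallel or perpendicular
    to the dominant (longest) side.

    seg_angles: segment orientations in radians; returns adjusted angles.
    """
    dominant = seg_angles[int(np.argmax(seg_lengths))]
    # offset of each segment from the nearest multiple of 90 deg w.r.t. dominant
    rel = np.mod(seg_angles - dominant, np.pi / 2)
    snap = np.where(rel > np.pi / 4, rel - np.pi / 2, rel)
    return seg_angles - snap
```

A full implementation would also move the segment endpoints to realise the snapped orientations; the sketch only shows the angular part of the constraint.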


Digital Image Computing: Techniques and Applications | 2014

A Novel Multi-Modal Image Registration Method Based on Corners

Guohua Lv; Shyh Wei Teng; Guojun Lu

This paper presents a novel corner-based method for registering multi-modal images. The proposed method is motivated by the fact that large content differences are likely to occur in multi-modal images. Unlike traditional multi-modal image registration methods that utilize intensities or gradients for feature representation, we propose to use the curvatures of corners. Moreover, a novel local descriptor called Distribution of Edge Pixels Along Contour (DEPAC) is proposed to represent the neighborhood of corners. Curvature and DEPAC similarities are combined in our method to improve registration accuracy. Using a set of benchmark multi-modal images and multi-modal microscopic images, we demonstrate that our proposed method outperforms an existing state-of-the-art image registration method.


Multimedia Tools and Applications | 2018

A detector of structural similarity for multi-modal microscopic image registration

Guohua Lv; Shyh Wei Teng; Guojun Lu

This paper presents a Detector of Structural Similarity (DSS) to minimize the visual differences between brightfield and confocal microscopic images. This work is motivated by the fact that registering such images effectively is very challenging due to the low structural similarity of their contents. To address this issue, DSS aims to maximize the structural similarity by utilizing the intensity relationships among red-green-blue (RGB) channels in the images. Technically, DSS can be combined with any multi-modal image registration technique in registering brightfield and confocal microscopic images. Our experimental results show that DSS significantly increases the visual similarity of such images, thereby improving the registration performance of an existing state-of-the-art multi-modal image registration technique by up to approximately 27%.

Collaboration


Dive into Guojun Lu's collaborations.

Top Co-Authors

Mohammad Awrangjeb (Federation University Australia)
Shyh Wei Teng (Federation University Australia)
Guohua Lv (Qilu University of Technology)
Dengsheng Zhang (Federation University Australia)
Priyabrata Karmakar (Federation University Australia)