Ruisheng Wang
University of Calgary
Publication
Featured research published by Ruisheng Wang.
International Journal of Image and Data Fusion | 2013
Ruisheng Wang
3D modeling from images and LiDAR (Light Detection And Ranging) has been an active research area in the photogrammetry, computer vision, and computer graphics communities. A comprehensive survey of 3D building modeling that covers methods from all of these fields is therefore valuable. This article surveys state-of-the-art 3D building modeling methods in the areas of photogrammetry, computer vision, and computer graphics. The existing methods are grouped into three categories: 3D reconstruction from images, 3D modeling using range data, and 3D modeling using both images and range data. Using both types of data is a sensor fusion approach, for which methods of image-to-LiDAR registration, upsampling, and image-guided segmentation are reviewed. For each category, the key problems are identified and solutions are addressed.
Advances in Geographic Information Systems | 2009
Xin Chen; Brad Kohlmeyer; Matei Stroila; Narayanan Alwar; Ruisheng Wang; Jeff Bach
This paper presents a novel method for processing large-scale, ground-level Light Detection and Ranging (LIDAR) data to automatically detect geo-referenced navigation attributes (traffic signs and lane markings) along a collection travel path. A mobile data collection device is introduced. Both the intensity of the LIDAR light return and the 3-D information of the point clouds are used to find retroreflective painted objects. Panoramic and high-definition images are registered with the 3-D point clouds so that the content and color of each sign can subsequently be extracted.
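The paper's detection pipeline is not reproduced here, but its first pass, using return intensity to isolate retroreflective paint, can be sketched roughly as follows. The threshold values and the height filter are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def retroreflective_candidates(points, intensities, intensity_thresh=200.0, min_height=0.5):
    """First-pass filter for sign-like LiDAR points.

    Retroreflective paint (signs, lane markings) returns far more energy
    than asphalt or vegetation, so a simple intensity cut is a reasonable
    first-pass detector. The height filter (hypothetical) keeps candidates
    above the road surface; lane markings would use a separate, near-ground
    filter. `points` is an (N, 3) array of x, y, z; `intensities` is (N,).
    """
    points = np.asarray(points, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    mask = (intensities >= intensity_thresh) & (points[:, 2] >= min_height)
    return points[mask]
```

In the paper, such intensity candidates are then combined with the 3-D structure of the point cloud and with the registered imagery to extract sign content and color.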
Workshop on Applications of Computer Vision | 2011
Ruisheng Wang; Jeff Bach; Frank P. Ferrie
We present an automatic approach to window and façade detection from LiDAR (Light Detection And Ranging) data collected from a vehicle moving along streets in urban environments. The proposed method combines bottom-up and top-down strategies to extract façade planes from noisy LiDAR point clouds. Window detection is achieved through a two-step approach: potential window point detection and window localization. The façade pattern is automatically inferred to enhance the robustness of the window detection. On six datasets, the method achieves completeness rates of 71.2% and 88.9% on the first two datasets and 100% on the remaining four, with a 100% correctness rate on all tested datasets, demonstrating the effectiveness of the proposed solution. Potential applications include the generation of building façade models with street-level details and texture synthesis for producing realistic occlusion-free façade textures.
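The abstract does not spell out how façade planes are extracted from the noisy point cloud; a standard technique for this step is RANSAC plane fitting, sketched below as a minimal illustration (iteration count and inlier tolerance are assumed values, and the paper's actual bottom-up/top-down strategy is more involved):

```python
import numpy as np

def ransac_plane(points, n_iter=300, tol=0.05, rng=None):
    """Find the inliers of the dominant plane in a noisy point cloud.

    Repeatedly fits a plane through 3 random points and keeps the
    hypothesis that explains the most points within distance `tol`.
    Returns a boolean inlier mask over the (N, 3) input array.
    """
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Applied repeatedly (removing inliers each round), this yields candidate façade planes on which the window-point detection step can then operate.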
Computer Vision and Pattern Recognition | 2012
Ruisheng Wang; Frank P. Ferrie; Jane Macfarlane
We present an automatic mutual information (MI) registration method for mobile LiDAR and panoramas collected from a driving vehicle. The suitability of MI for registering aerial LiDAR with aerial oblique images has been demonstrated in [17], under the assumption that minimizing joint entropy (JE) is a sufficient approximation of maximizing MI. In this paper, we show that this assumption is invalid for ground-level data: the entropy of a LiDAR image cannot be regarded as approximately constant under small perturbations. Instead of minimizing the JE, we directly maximize MI to estimate corrections to the camera poses. Our method automatically registers mobile LiDAR with spherical panoramas over an approximately 4-kilometer drive, and is the first example we are aware of that tests mutual information registration in a large-scale context.
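The quantity being maximized can be illustrated with the standard histogram estimate of MI between two images (here a generic two-image sketch, not the paper's LiDAR-to-panorama projection machinery; the bin count is an assumption):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram estimate of mutual information between two images.

    MI = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ), estimated from
    the joint histogram of corresponding pixel intensities. A pose that
    aligns a rendered LiDAR image with the camera image yields a peaked
    joint histogram and hence a larger MI score.
    """
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint distribution
    px = pxy.sum(axis=1)             # marginal of img_a
    py = pxy.sum(axis=0)             # marginal of img_b
    nz = pxy > 0                     # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

Pose correction then amounts to searching over small camera-pose perturbations for the one that maximizes this score, which is exactly why MI's dependence on the (non-constant) LiDAR-image entropy matters at ground level.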
Workshop on Applications of Computer Vision | 2012
Ruisheng Wang; Jeff Bach; Jane Macfarlane; Frank P. Ferrie
We present a novel method to upsample mobile LiDAR data using panoramic images collected in urban environments. Our method differs from existing methods in the following aspects: First, we consider point visibility with respect to a given viewpoint, and use only visible points for interpolation; second, we present a multi-resolution depth map based visibility computation method; third, we present ray casting methods for upsampling mobile LiDAR data incorporating constraints from color information of spherical images. The experiments show the effectiveness of the proposed approach.
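The visibility idea in the first two points, using only the points nearest to the viewpoint in each image region and treating farther points as occluded, can be sketched as a single-resolution depth-buffer test (the paper uses a multi-resolution depth map; the pixel bucketing and margin here are simplifying assumptions):

```python
import numpy as np

def visibility_mask(pixel_ids, depths, margin=0.2):
    """Mark LiDAR points visible from a viewpoint via a depth buffer.

    `pixel_ids[i]` is the image pixel that point i projects to and
    `depths[i]` its distance from the viewpoint. For each pixel, only
    points within `margin` of the closest projected point are kept;
    deeper points in the same pixel are occluded.
    """
    pixel_ids = np.asarray(pixel_ids)
    depths = np.asarray(depths, dtype=float)
    nearest = {}
    for pid, d in zip(pixel_ids, depths):       # nearest depth per pixel
        if pid not in nearest or d < nearest[pid]:
            nearest[pid] = d
    return np.array([d <= nearest[p] + margin
                     for p, d in zip(pixel_ids, depths)])
```

Only the points that survive this mask would then be interpolated, with the ray-casting step additionally constrained by the color of the spherical image.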
Computer Vision and Pattern Recognition | 2008
Ruisheng Wang; Frank P. Ferrie
This paper presents a new method for reconstructing rectilinear buildings from single images under the assumption of flat terrain. The intuition behind the method is that, given an image composed of rectilinear buildings, the 3D buildings can be geometrically reconstructed from the image alone. The recovery algorithm is formulated in terms of two objective functions based on the equivalence between the vector normal to the interpretation plane in image space and the vector normal to the rotated interpretation plane in object space. These objective functions are minimized with respect to the camera pose and the building dimensions, locations, and orientations to obtain estimates of the scene structure. The method potentially provides a solution for large-scale urban modelling from aerial images, and can be easily extended to handle piecewise planar objects in more general situations.
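The constraint underlying the objective functions can be made concrete: an image line segment and the camera center span an "interpretation plane", and for the reconstruction to be consistent, that plane's normal, rotated into object space, must be parallel to the corresponding object-space plane normal. A minimal sketch, with an idealized pinhole camera and image coordinates already centered on the principal point (both assumptions, not the paper's exact formulation):

```python
import numpy as np

def interpretation_plane_normal(p1, p2, f=1.0):
    """Normal of the plane through the camera center and an image line.

    p1, p2 are 2-D image points of a line segment; f is the focal
    length. The two viewing rays (x, y, f) span the interpretation
    plane, whose unit normal is their cross product.
    """
    r1 = np.array([p1[0], p1[1], f], dtype=float)
    r2 = np.array([p2[0], p2[1], f], dtype=float)
    n = np.cross(r1, r2)
    return n / np.linalg.norm(n)

def alignment_residual(n_img, n_obj, R):
    """Residual of the constraint R @ n_img parallel to n_obj.

    Zero when the rotated image-space normal is parallel to the
    object-space normal; an optimizer would sum such residuals over all
    building edges and minimize over pose and building parameters.
    """
    return float(np.linalg.norm(np.cross(R @ n_img, n_obj)))
```

Minimizing the summed residuals over camera pose and building dimensions, locations, and orientations is the general shape of the recovery described in the abstract.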
ISPRS International Journal of Geo-Information | 2016
Jia Qiu; Ruisheng Wang
We propose a new segmentation and grouping framework for road map inference from GPS traces. We first present a progressive Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm with an orientation constraint to partition the whole point set of the traces into clusters that represent road segments. A new point cluster grouping algorithm is then developed that recovers the road network according to the topological relationship and spatial proximity of the point clusters. After generating the point clusters, the robust Locally-Weighted Scatterplot Smooth (Lowess) method is used to extract their centerlines. We then propose to build the topological relationship of the centerlines with a Hidden Markov Model (HMM)-based map matching algorithm, and to assess the spatial proximity between point clusters by assuming that the distances from the points to the centerline follow a Gaussian distribution. Finally, the point clusters are grouped according to their topological relationship and spatial proximity to form strokes for recovering the road map. Experimental results show that our algorithm is robust to noise and varied sampling rates, and the generated road maps show high geometric accuracy.
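The key modification to DBSCAN is that two GPS points count as neighbors only if they are both spatially close and similarly oriented, which keeps crossing roads in separate clusters. A compact sketch of that idea (the ε, angle, and min-points parameters are illustrative, and the paper's "progressive" variant adds more than this):

```python
import numpy as np

def dbscan_oriented(points, headings, eps=1.5, max_dtheta=20.0, min_pts=2):
    """DBSCAN with an orientation constraint on the neighborhood.

    `points` is (N, 2) positions, `headings` is (N,) travel directions
    in degrees. Two points are neighbors only if their distance is at
    most `eps` AND their heading difference is at most `max_dtheta`.
    Returns cluster labels (noise points stay -1).
    """
    points = np.asarray(points, dtype=float)
    headings = np.asarray(headings, dtype=float)
    n = len(points)
    labels = np.full(n, -1)

    def neighbours(i):
        d = np.linalg.norm(points - points[i], axis=1)
        dth = np.abs((headings - headings[i] + 180.0) % 360.0 - 180.0)
        return np.where((d <= eps) & (dth <= max_dtheta))[0]

    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:       # not a core point; possibly noise
            continue
        labels[i] = cluster
        queue = list(nb)
        while queue:                # expand the cluster from core points
            j = queue.pop()
            if labels[j] != -1:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:
                queue.extend(nb_j)
        cluster += 1
    return labels
```

At an intersection, points on the two roads lie within ε of each other but differ in heading by about 90 degrees, so the orientation constraint keeps the road segments apart.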
Canadian Conference on Artificial Intelligence | 2014
Jia Qiu; Ruisheng Wang; Xin Wang
In this paper, we propose a new segmentation-and-grouping framework for road map inference from sparsely sampled GPS traces. First, we extend DBSCAN with an orientation constraint to partition the whole point set of the traces into clusters representing road segments. Second, we propose an adaptive k-means algorithm, in which the value of k is determined by an angle threshold, to reconstruct nearly straight line segments. Third, the line segments are grouped according to the ‘Good Continuity’ principle of Gestalt law to form a ‘Stroke’ for recovering the road map. Experimental results show that our algorithm is robust to noise and sampling rate. In comparison with previous work, our method is better able to infer road maps from sparsely sampled GPS traces.
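The angle-threshold stopping rule in the second step, keep subdividing until every piece is nearly straight, can be illustrated with a simple recursive split at the sharpest turn. Note this recursive split is a stand-in for illustration only; the paper itself uses an adaptive k-means, and the threshold value is an assumption:

```python
import numpy as np

def split_by_angle(points, angle_thresh=15.0):
    """Split an ordered trace into nearly straight pieces.

    Computes the turning angle at every interior vertex of the ordered
    (N, 2) polyline and recursively splits at the sharpest turn until no
    piece turns by more than `angle_thresh` degrees. Returns the list of
    pieces (illustrates the angle-threshold rule, not the paper's
    adaptive k-means itself).
    """
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return [points]
    v = np.diff(points, axis=0)
    angles = np.zeros(len(points))
    for i in range(1, len(points) - 1):
        a, b = v[i - 1], v[i]
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles[i] = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    i_max = int(np.argmax(angles))
    if angles[i_max] <= angle_thresh:       # already nearly straight
        return [points]
    return (split_by_angle(points[: i_max + 1], angle_thresh)
            + split_by_angle(points[i_max:], angle_thresh))
```

The resulting straight pieces are what the third step then chains into ‘Strokes’ by good continuation of direction.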
Photogrammetric Engineering and Remote Sensing | 2012
Ruisheng Wang; Frank P. Ferrie; Jane Macfarlane
Mobile lidar (light detection and ranging) data collection is a rapidly emerging technology in which multiple georeferenced sensors (e.g., laser scanners, cameras) are mounted on a moving vehicle to collect real-world data. The photorealistic modeling of large-scale real-world scenes such as urban environments has become increasingly interesting to the vision, graphics, and photogrammetry communities. This article presents an automatic approach to window and facade detection from mobile lidar data. The proposed method combines bottom-up and top-down strategies to extract facade planes from noisy lidar point clouds. Window detection is achieved through a two-step approach: potential window point detection and window localization. The facade pattern is automatically inferred to enhance the robustness of the window detection. Potential applications include the generation of building facade models with street-level details and texture synthesis for producing realistic occlusion-free facade textures.
Computer Vision and Image Understanding | 2016
Tianshu Yu; Ruisheng Wang
Highlights: An effective scene parsing framework via graph matching guidance on street-level data is proposed. Graph matching is introduced to partially match image components, taking into account the regional similarity of scenes. The proposed algorithm can be applied to small training and testing sets, and achieves competitive parsing performance.

Scene parsing, using both images and range data, is one of the key problems in computer vision and robotics. In this paper, a street scene parsing scheme that takes advantage of images from perspective cameras and range data from LiDAR is presented. First, the image set is pre-processed and the corresponding point cloud is segmented according to semantics and transformed into the image pose. A graph matching approach is introduced into our parsing framework in order to identify similar sub-regions in training and test images in terms of both local appearance and spatial structure. By using the sub-graphs inherited from training images, as well as cues obtained from the point clouds, this approach can effectively interpret the street scene via guided MRF inference. Experimental results show the promising performance of our approach.