
Publication


Featured research published by Jinglu Wang.


IEEE Transactions on Image Processing | 2014

Joint Segmentation of Images and Scanned Point Cloud in Large-Scale Street Scenes With Low-Annotation Cost

Honghui Zhang; Jinglu Wang; Tian Fang; Long Quan

We propose a novel method for the parsing of images and scanned point clouds in large-scale street environments. The proposed method significantly reduces the intensive labeling cost of previous works by automatically generating training data from the input data. The automatic generation of training data begins with the initialization of training data with weak priors in the street environment, followed by a filtering scheme to remove mislabeled training samples. We formulate the filtering as a binary labeling optimization problem over a conditional random field that we call the object graph, simultaneously integrating a spatial smoothness preference and label consistency between 2D and 3D. For the final parsing, with the automatically generated training data, a CRF-based parsing method that integrates the coordination of image appearance and 3D geometry is adopted to parse large-scale street scenes. The proposed approach is evaluated on city-scale Google Street View data, demonstrating encouraging parsing performance.
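The filtering step can be illustrated with a toy sketch (hypothetical names, not the paper's code): a binary CRF over a small object graph combines unary keep/discard costs with a Potts smoothness term, and the tiny graph is minimized exactly by enumeration, whereas the paper's setting would require a scalable solver such as graph cuts.

```python
import itertools

def crf_energy(labels, unary, edges, w_smooth=1.0):
    """Energy of a binary labeling: unary keep/discard costs plus a
    Potts smoothness penalty on object-graph edges that disagree."""
    energy = sum(unary[i][labels[i]] for i in range(len(labels)))
    energy += sum(w_smooth for i, j in edges if labels[i] != labels[j])
    return energy

def filter_samples(unary, edges, w_smooth=1.0):
    """Exact minimization by enumerating all binary labelings; feasible
    only for tiny graphs (large graphs would use graph cuts)."""
    n = len(unary)
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda lab: crf_energy(lab, unary, edges, w_smooth))
    return list(best)

# Two confident "keep" samples and one ambiguous sample connected to both;
# the smoothness term pulls the ambiguous sample toward its neighbors.
unary = [[2.0, 0.1], [2.0, 0.1], [0.9, 1.0]]   # unary[i][label] = cost
edges = [(0, 2), (1, 2)]                        # object-graph adjacency
print(filter_samples(unary, edges))             # -> [1, 1, 1]
```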


IEEE Transactions on Visualization and Computer Graphics | 2016

Image-Based Building Regularization Using Structural Linear Features

Jinglu Wang; Tian Fang; Qingkun Su; Siyu Zhu; Jingbo Liu; Shengnan Cai; Chiew-Lan Tai; Long Quan

Reconstructed building models produced by stereo-based methods inevitably suffer from noise, leading to a lack of regularity, which is characterized by the straightness of structural linear features and the smoothness of homogeneous regions. We leverage the structural linear features embedded in the mesh to construct a novel surface scaffold structure for model regularization. The regularization comprises two iterative stages: (1) the linear features are semi-automatically proposed from images by jointly exploiting photometric and geometric cues; (2) the scaffold topology, represented by the spatial relations among the linear features, is optimized according to data fidelity and topological rules, and the mesh is then refined by conforming it to the consolidated scaffold. Our method has two advantages. First, the proposed scaffold representation concisely describes semantic building structures. Second, the scaffold structure is embedded in the mesh, which preserves the mesh connectivity and avoids stitching or intersecting surfaces in challenging cases. We demonstrate that our method robustly enhances structural characteristics and suppresses irregularities in building models on several challenging datasets. Moreover, the regularization significantly improves the results of general applications such as simplification and non-photorealistic rendering.


International Conference on Computer Vision | 2015

Higher-Order CRF Structural Segmentation of 3D Reconstructed Surfaces

Jingbo Liu; Jinglu Wang; Tian Fang; Chiew-Lan Tai; Long Quan

In this paper, we propose a structural segmentation algorithm to partition multi-view stereo reconstructed surfaces of large-scale urban environments into structural segments. Each segment corresponds to a structural component describable by a surface primitive of up to the second order. This segmentation is intended for subsequent urban object modeling, vectorization, and recognition. To overcome the high geometric and topological noise levels in 3D reconstructed urban surfaces, we formulate the structural segmentation as a higher-order Conditional Random Field (CRF) labeling problem. It not only incorporates classical lower-order 2D and 3D local cues, but also encodes contextual geometric regularities to disambiguate the noisy local cues. A general higher-order CRF is difficult to solve. We develop a bottom-up progressive approach through a patch-based surface representation, which iteratively evolves from the initial mesh triangles to the final segmentation. Each iteration alternates between a prior discovery step, which finds the contextual regularities of the patch-based representation, and an inference step, which leverages the regularities as higher-order priors to construct a more stable and regular segmentation. The efficiency and robustness of the proposed method are extensively demonstrated on real reconstruction models, yielding significantly better performance than classical mesh segmentation methods.
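The bottom-up patch evolution can be loosely illustrated as follows (a simplified sketch with hypothetical names, restricted to first-order planar primitives rather than the up-to-second-order primitives and higher-order priors of the paper): adjacent patches are greedily merged whenever their union still fits a single plane within tolerance.

```python
import numpy as np

def plane_residual(pts):
    """RMS distance of a point set to its best-fit plane (via SVD)."""
    centered = pts - pts.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[-1] / np.sqrt(len(pts))

def merge_patches(patches, adjacency, tol=0.05):
    """One greedy bottom-up pass over adjacent patch pairs: merge whenever
    the union still fits a single plane within `tol`; union-find roots
    identify the surviving segments."""
    parent = list(range(len(patches)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i, j in adjacency:
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        union = np.vstack([patches[ri], patches[rj]])
        if plane_residual(union) < tol:
            parent[rj] = ri
            patches[ri] = union
    return sorted({find(i) for i in range(len(patches))})
```

Two coplanar patches collapse into one segment, while a patch lying off that plane keeps its own root.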


International Conference on Computer Vision | 2013

Learning CRFs for Image Parsing with Adaptive Subgradient Descent

Honghui Zhang; Jingdong Wang; Ping Tan; Jinglu Wang; Long Quan

We propose an adaptive subgradient descent method to efficiently learn the parameters of CRF models for image parsing. To balance the learning efficiency and the performance of the learned CRF models, parameter learning is carried out iteratively by solving a convex optimization problem in each iteration, which integrates a proximal term to preserve the previously learned information and a large-margin preference to distinguish bad labelings from the ground-truth labeling. A subgradient-descent update rule with an adaptively determined step size is derived for the convex optimization problem. In addition, to deal with partially labeled training data, we propose a new objective constraint that models both the labeled and unlabeled parts of the partially labeled training data for the parameter learning of CRF models. The superior learning efficiency of the proposed method is verified by experimental results on two public datasets. We also demonstrate the effectiveness of our method in handling partially labeled training data.
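The flavor of a proximal, margin-based update with an adaptive step size can be sketched as follows (an illustrative passive-aggressive-style step with hypothetical names, not the authors' exact derivation): minimizing a proximal term subject to a margin constraint yields a closed-form subgradient step whose size depends on the current violation.

```python
import numpy as np

def pa_update(w, feat_gt, feat_bad, margin=1.0):
    """One proximal, margin-based step: minimize 0.5*||w' - w||^2 subject
    to scoring the ground truth above the bad labeling by `margin`; the
    closed form is a hinge subgradient step of size loss / ||g||^2."""
    loss = max(0.0, margin + w @ feat_bad - w @ feat_gt)
    if loss == 0.0:
        return w                     # margin satisfied: proximal term keeps w
    g = feat_bad - feat_gt           # subgradient of the hinge loss
    return w - (loss / (g @ g)) * g  # adaptively sized update

w = pa_update(np.zeros(2),
              feat_gt=np.array([1.0, 0.0]),   # features of ground-truth labeling
              feat_bad=np.array([0.0, 1.0]))  # features of a bad labeling
# after one step the ground truth scores exactly `margin` higher
```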


Science in China Series F: Information Sciences | 2017

A robust three-stage approach to large-scale urban scene recognition

Jinglu Wang; Yonghua Lu; Jingbo Liu; Long Quan

To obtain an ultimate high-level description of urban scenes, we propose a three-stage approach to recognizing the 3D reconstructed scene with efficient representations. First, we develop a joint semantic labeling method that labels the triangular mesh-based representation by exploiting both image features and geometric features. The labeling is formulated over a conditional random field (CRF) that incorporates local spatial smoothness and multi-view consistency. Then, based on the labeled reconstructed meshes, we refine the man-made object segmentation in the recomposed global orthographic map with a graph partition algorithm, and propagate the coherent segmentation to the entire 3D meshes. Finally, we generate a compact, abstracted geometric representation for each man-made object that is more visually appealing than the original cluttered models. This abstraction algorithm also leverages a CRF formulation to partition building footprints into minimal sets of structural linear features, which are then used to construct profiles for large-scale scenes. The proposed recognition approach robustly handles reconstructions with poor geometry and connectivity, thanks to the higher-order CRF formulations that impose the ubiquitous regularity priors of urban scenes. Each stage performs an individual, decoupled task. Extensive experiments have demonstrated the superior performance of our approach in robustness, accuracy, and applicability.


Asian Conference on Pattern Recognition | 2015

Structure-driven facade parsing with irregular patterns

Jinglu Wang; Chun Liu; Tianwei Shen; Long Quan

We propose a novel method for recognizing irregular patterns in facades. An irregular pattern is an incomplete 2D grid representing the placements of repetitive structural architectural objects (e.g., windows), and it generalizes to a wide variety of facade structures. To effectively recognize such a pattern, we jointly model objects and object structures in a unified Marked Point Process framework, where the architectural objects are abstracted as sparsely populated geometric entities and the pairwise spatial interactions are modeled as elliptical repulsion fields. To optimize the proposed model, we introduce a structure-driven Markov chain Monte Carlo (MCMC) sampler, by which irregular pattern hypotheses are iteratively constructed in a bottom-up manner and verified in a top-down manner. The solution space is thus explored more efficiently for fast convergence. Extensive experiments have shown the efficiency and accuracy of our method in parsing a large category of facades.
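A minimal sketch of a birth/death MCMC sampler over a point process with repulsion (hypothetical names; 1D points, a stand-in data term, and omitted proposal-ratio corrections, unlike the paper's structure-driven moves in 2D):

```python
import math
import random

def energy(points, reward=1.0, repulsion=4.0, radius=0.15):
    """Marked-point-process energy: each point earns a data reward
    (a stand-in for image evidence) but close pairs pay a repulsion cost."""
    e = -reward * len(points)
    e += sum(repulsion
             for i in range(len(points))
             for j in range(i + 1, len(points))
             if abs(points[i] - points[j]) < radius)
    return e

def sample_pattern(n_iter=5000, seed=0):
    """Birth/death Metropolis-Hastings: propose adding or removing a point
    and accept with probability min(1, exp(E_old - E_new))."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n_iter):
        if not pts or rng.random() < 0.5:            # birth move
            prop = pts + [rng.random()]
        else:                                        # death move
            k = rng.randrange(len(pts))
            prop = pts[:k] + pts[k + 1:]
        if rng.random() < math.exp(min(0.0, energy(pts) - energy(prop))):
            pts = prop
    return sorted(pts)
```

The repulsion term spaces the sampled points apart, yielding a sparse, roughly regular layout as the chain mixes.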


Asian Conference on Computer Vision | 2016

Color Correction for Image-Based Modeling in the Large

Tianwei Shen; Jinglu Wang; Tian Fang; Siyu Zhu; Long Quan

Current texture creation methods for image-based modeling suffer from color discontinuity issues due to drastically varying illumination, exposure, and capture times during the image acquisition process. This paper proposes a novel system that generates consistent textures for triangular meshes. The key to our system is a color correction framework for large-scale unordered image collections. We model the problem as a graph-structured optimization over the overlapping regions of image pairs. After reconstructing the mesh of the scene, we accurately determine matched image regions by re-projecting images onto the mesh. The image collection is then robustly adjusted using a non-linear least-squares solver over color histograms in an unsupervised fashion. Finally, a connectivity-preserving edge pruning method is introduced to accelerate the color correction process. The system is evaluated on crowdsourced image collections containing medium-sized scenes as well as city-scale urban datasets. To the best of our knowledge, this is the first consistent texturing system for image-based modeling capable of handling thousands of input images.
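A heavily simplified sketch of the pairwise adjustment (hypothetical names; per-image scalar gains and overlap mean intensities instead of the paper's non-linear histogram optimization) reduces to a linear least-squares problem anchored at one image:

```python
import numpy as np

def solve_gains(overlaps, n_images):
    """Per-image multiplicative gains from pairwise overlap statistics:
    each overlap (i, j, m_ij, m_ji) asks for g_i * m_ij = g_j * m_ji,
    where m_ij is image i's mean intensity inside the shared region.
    The first image is anchored at gain 1 to fix the global scale."""
    rows, rhs = [], []
    for i, j, m_ij, m_ji in overlaps:
        row = np.zeros(n_images)
        row[i], row[j] = m_ij, -m_ji
        rows.append(row)
        rhs.append(0.0)
    anchor = np.zeros(n_images)
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(1.0)                                  # g_0 = 1
    gains, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return gains

# Image 1 is half as bright as image 0 in their shared region, so its
# gain comes out near 2; image 2 already matches image 1.
gains = solve_gains([(0, 1, 100.0, 50.0), (1, 2, 50.0, 50.0)], n_images=3)
```

The graph structure of the real system corresponds to one such constraint per surviving edge after the connectivity-preserving pruning.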


International Conference on Computer Vision | 2017

Progressive Large Scale-Invariant Image Matching in Scale Space

Lei Zhou; Siyu Zhu; Tianwei Shen; Jinglu Wang; Tian Fang; Long Quan


arXiv: Computer Vision and Pattern Recognition | 2017

Parallel Structure from Motion from Local Increment to Global Averaging

Siyu Zhu; Tianwei Shen; Lei Zhou; Runze Zhang; Jinglu Wang; Tian Fang; Long Quan


Archive | 2015

Semantic Segmentation of Large-Scale Urban 3D Data with Low Annotation Cost

Jinglu Wang; Shiwei Li; Runze Zhang; Long Quan

Collaboration


Dive into Jinglu Wang's collaborations.

Top Co-Authors

Long Quan | Hong Kong University of Science and Technology
Tian Fang | Hong Kong University of Science and Technology
Siyu Zhu | Hong Kong University of Science and Technology
Tianwei Shen | Hong Kong University of Science and Technology
Jingbo Liu | Hong Kong University of Science and Technology
Chiew-Lan Tai | Hong Kong University of Science and Technology
Honghui Zhang | Hong Kong University of Science and Technology
Lei Zhou | Hong Kong University of Science and Technology
Runze Zhang | Hong Kong University of Science and Technology
Chun Liu | Hong Kong University of Science and Technology