
Publication


Featured research published by Liangliang Nan.


International Conference on Computer Graphics and Interactive Techniques | 2012

A search-classify approach for cluttered indoor scene understanding

Liangliang Nan; Ke Xie; Andrei Sharf

We present an algorithm for recognition and reconstruction of scanned 3D indoor scenes. 3D indoor reconstruction is particularly challenging due to object interference, occlusions, and overlaps, which yield incomplete yet very complex scene arrangements. Since it is hard to assemble scanned segments into complete models, traditional methods for object recognition and reconstruction would be inefficient. We present a search-classify approach which interleaves segmentation and classification in an iterative manner. Using a robust classifier we traverse the scene and gradually propagate classification information. We reinforce classification by a template fitting step which yields a scene reconstruction. We deform-to-fit templates to classified objects to resolve classification ambiguities. The resulting reconstruction is an approximation which captures the general scene arrangement. Our results demonstrate successful classification and reconstruction of cluttered indoor scenes, captured in just a few minutes.
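The interleaved search-classify loop described above can be sketched in a toy form. Everything here is illustrative, not the paper's implementation: segments are 2D feature vectors, the "robust classifier" is stood in for by a nearest-centroid rule, and confidence is simply the gap between the two best centroid distances.

```python
# Hypothetical sketch of the search-classify loop (all names are illustrative).
def classify(features, centroids):
    """Return (label, confidence) using a nearest-centroid toy classifier."""
    dists = {label: sum((f - c) ** 2 for f, c in zip(features, centroid))
             for label, centroid in centroids.items()}
    label = min(dists, key=dists.get)
    ranked = sorted(dists.values())
    # Confidence: gap between best and second-best distance.
    confidence = ranked[1] - ranked[0] if len(ranked) > 1 else 1.0
    return label, confidence

def search_classify(segments, centroids):
    """Iteratively classify the most confident unlabeled segment first."""
    labels = {}
    pending = dict(segments)               # id -> feature vector
    while pending:
        # Search step: pick the segment the classifier is most sure about,
        # so confident decisions are committed before ambiguous ones.
        best = max(pending, key=lambda s: classify(pending[s], centroids)[1])
        labels[best], _ = classify(pending[best], centroids)
        del pending[best]
    return labels

centroids = {"chair": (0.3, 1.0), "table": (1.5, 0.7)}
segments = {"seg0": (0.2, 0.9), "seg1": (1.6, 0.8)}
print(search_classify(segments, centroids))   # -> {'seg0': 'chair', 'seg1': 'table'}
```

The actual method additionally feeds the classification back into segmentation and verifies labels by deform-to-fit template matching.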


International Conference on Computer Graphics and Interactive Techniques | 2010

SmartBoxes for interactive urban reconstruction

Liangliang Nan; Andrei Sharf; Hao Zhang; Daniel Cohen-Or; Baoquan Chen

We introduce an interactive tool which enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on-the-fly to fit well to the point data, while at the same time respecting contextual relations with nearby similar blocks. SmartBoxes are assembled through a discrete optimization to balance between two snapping forces defined respectively by a data-fitting term and a contextual term, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that a combination of the user's interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures which are partially or even completely missing from the input.
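The balance between the two snapping forces can be illustrated with a deliberately simplified 1D version. Only the terminology is taken from the abstract; the function, weights, and candidate set below are assumptions, and the real optimization is over full boxes rather than a single edge coordinate.

```python
# Illustrative 1D snapping: a box edge chooses among candidate positions by
# balancing a data-fitting term (distance to the nearest scanned point) and a
# contextual term (distance to where a similar neighbouring box placed its edge).
def snap_edge(candidates, points, neighbour_edge, w_data=1.0, w_context=0.5):
    def cost(x):
        data = min(abs(x - p) for p in points)      # data-fitting term
        context = abs(x - neighbour_edge)           # contextual term
        return w_data * data + w_context * context
    return min(candidates, key=cost)

# Points cluster near 2.0; a similar neighbouring box snapped its edge to 2.1.
print(snap_edge(candidates=[0.0, 1.0, 2.0, 3.0],
                points=[1.9, 2.0, 2.05],
                neighbour_edge=2.1))   # -> 2.0
```

Even in this toy setting, the contextual term keeps the chosen edge consistent with repeated structure when the local point data is sparse or noisy.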


International Conference on Computer Graphics and Interactive Techniques | 2011

Conjoining Gestalt rules for abstraction of architectural drawings

Liangliang Nan; Andrei Sharf; Ke Xie; Tien-Tsin Wong; Oliver Deussen; Daniel Cohen-Or; Baoquan Chen

We present a method for structural summarization and abstraction of complex spatial arrangements found in architectural drawings. The method is based on the well-known Gestalt rules, which summarize how forms, patterns, and semantics are perceived by humans from bits and pieces of geometric information. Although defining a computational model for each rule alone has been extensively studied, modeling a conjoint of Gestalt rules remains a challenge. In this work, we develop a computational framework which models Gestalt rules and more importantly, their complex interactions. We apply conjoining rules to line drawings, to detect groups of objects and repetitions that conform to Gestalt principles. We summarize and abstract such groups in ways that maintain structural semantics by displaying only a reduced number of repeated elements, or by replacing them with simpler shapes. We show an application of our method to line drawings of architectural models of various styles, and the potential of extending the technique to other computer-generated illustrations, and three-dimensional models.
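A single Gestalt rule in isolation is easy to model; the sketch below shows just the proximity rule on 1D element positions, with each detected group abstracted by a representative element. This is purely illustrative: the paper's contribution is the conjoint modeling of several interacting rules, which this toy does not attempt.

```python
# Minimal sketch of one Gestalt rule (proximity) on line-drawing elements.
def group_by_proximity(xs, gap=1.0):
    """Group sorted positions whose consecutive spacing is within `gap`."""
    xs = sorted(xs)
    groups, current = [], [xs[0]]
    for x in xs[1:]:
        if x - current[-1] <= gap:
            current.append(x)
        else:
            groups.append(current)
            current = [x]
    groups.append(current)
    return groups

def abstract(groups):
    """Summarize each repetitive group by a single representative element."""
    return [g[0] for g in groups]

window_xs = [0.0, 0.8, 1.5, 6.0, 6.7, 12.0]   # three clusters of elements
groups = group_by_proximity(window_xs)
print(len(groups), abstract(groups))           # -> 3 [0.0, 6.0, 12.0]
```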


Computers & Graphics | 2016

Reconstructing building mass models from UAV images

Minglei Li; Liangliang Nan; Neil Smith; Peter Wonka

We present an automatic reconstruction pipeline for large-scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on the statistical analysis of the footprint grid of the buildings, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov random field optimization. Then, a contour refinement algorithm based on pivot point detection is utilized to refine the contour of patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes as well as comparisons with state-of-the-art reconstruction methods demonstrate the effectiveness and robustness of the proposed method.

Highlights:
- A novel framework for automatic reconstruction of large-scale urban scenes from UAV images.
- An object-level point cloud segmentation algorithm and a roof extraction algorithm based on a regularized Markov random field formulation.
- An effective contour refinement method based on pivot point detection.
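The footprint-grid classification step can be illustrated with per-cell height statistics. The function names, thresholds, and labeling rules below are assumptions for the sketch, not the paper's actual parameters or criteria.

```python
# Illustrative footprint-grid classification: bin points into a 2D grid and
# label each cell from simple height statistics.
def classify_cells(points, cell_size=1.0, ground_h=0.5, tree_var=0.4):
    """points: iterable of (x, y, z). Returns {cell: label}."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append(z)
    labels = {}
    for key, zs in cells.items():
        mean = sum(zs) / len(zs)
        var = sum((z - mean) ** 2 for z in zs) / len(zs)
        if mean < ground_h:
            labels[key] = "ground"
        elif var > tree_var:          # rough canopies: heights spread out
            labels[key] = "tree"
        else:                         # tall and planar: building roof
            labels[key] = "building"
    return labels

pts = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1),       # low, flat -> ground
       (1.2, 0.2, 5.0), (1.4, 0.4, 5.1),       # high, flat -> building
       (2.2, 0.1, 2.0), (2.4, 0.3, 4.5)]       # high, noisy -> tree
print(classify_cells(pts))
```

The pipeline's actual segmentation is object-level and regularized; this sketch only conveys why footprint-grid statistics separate the categories.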


Journal of Computer Applications in Technology | 2010

Laser remanufacturing based on the integration of reverse engineering and laser cladding

Liangliang Nan; Weijun Liu; Kai Zhang

Laser remanufacturing has been used as an approach to refurbish or to improve the surface quality of high-priced parts. However, most of the existing systems lack measuring and modelling functions, which results in uncertainty about the quality of end products. This paper presents a three-dimensional Laser Remanufacturing System (LRS) based on the integration of reverse engineering and laser cladding. A coaxial powder feeding system is developed to meet the requirement of three-dimensional laser cladding. Meanwhile, the geometric and mechanical properties of the metal layers fabricated by the LRS are explored. In addition, the principle, advantages, and applications of the LRS system are described.


European Conference on Computer Vision | 2016

Manhattan-World Urban Reconstruction from Point Clouds

Minglei Li; Peter Wonka; Liangliang Nan

Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods.
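The candidate-box step is easy to make concrete under the Manhattan-world assumption: detected axis-aligned plane positions partition space into a non-uniform grid, and every grid cell is a candidate box. The code below is a minimal sketch of that enumeration only; the paper's pipeline adds plane refinement and the MRF-based subset selection.

```python
# Enumerate candidate boxes between adjacent axis-aligned plane positions.
from itertools import product

def candidate_boxes(x_planes, y_planes, z_planes):
    """Each box is ((x0, x1), (y0, y1), (z0, z1)) between adjacent planes."""
    xs, ys, zs = sorted(set(x_planes)), sorted(set(y_planes)), sorted(set(z_planes))
    intervals = lambda v: list(zip(v, v[1:]))   # consecutive plane pairs
    return list(product(intervals(xs), intervals(ys), intervals(zs)))

boxes = candidate_boxes([0, 2, 5], [0, 3], [0, 2, 4])
print(len(boxes))   # 2 x-slabs * 1 y-slab * 2 z-slabs -> 4 candidate boxes
```

The grid is non-uniform because cell boundaries come from the detected planes rather than a fixed spacing.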


International Journal of Digital Earth | 2016

Fitting boxes to Manhattan scenes using linear integer programming

Minglei Li; Liangliang Nan; Shaochuang Liu

We propose an approach for automatic generation of building models by assembling a set of boxes using a Manhattan-world assumption. The method first aligns the point cloud with a per-building local coordinate system, and then fits axis-aligned planes to the point cloud through an iterative regularization process. The refined planes partition the space of the data into a series of compact cubic cells (candidate boxes) spanning the entire 3D space of the input data. We then choose to approximate the target building by the assembly of a subset of these candidate boxes using a binary linear programming formulation. The objective function is designed to maximize the point cloud coverage and the compactness of the final model. Finally, all selected boxes are merged into a lightweight polygonal mesh model, which is suitable for interactive visualization of large scale urban scenes. Experimental results and a comparison with state-of-the-art methods demonstrate the effectiveness of the proposed framework.
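The selection objective (coverage versus compactness) can be demonstrated on a toy instance. The paper solves a binary linear program; here exhaustive search over subsets stands in for the solver, and the per-box penalty weight is an assumption made up for the example.

```python
# Toy box-subset selection: maximize covered points minus a per-box penalty.
from itertools import combinations

def point_in_box(p, box):
    return all(lo <= c <= hi for c, (lo, hi) in zip(p, box))

def select_boxes(candidates, points, box_penalty=1.5):
    best_subset, best_score = (), float("-inf")
    for r in range(len(candidates) + 1):
        for subset in combinations(range(len(candidates)), r):
            covered = sum(any(point_in_box(p, candidates[i]) for i in subset)
                          for p in points)
            score = covered - box_penalty * len(subset)   # coverage vs. compactness
            if score > best_score:
                best_subset, best_score = subset, score
    return list(best_subset)

cands = [((0, 2), (0, 2)), ((2, 4), (0, 2)), ((5, 6), (5, 6))]   # 2D boxes
pts = [(1, 1), (1.5, 0.5), (3, 1), (3.5, 1.2)]                   # nothing near box 2
print(select_boxes(cands, pts))   # -> [0, 1]: the empty box is not worth its penalty
```

Brute force is exponential in the number of candidates; the binary-LP formulation is what makes the selection practical at building scale.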


International Conference on Computer Vision | 2017

PolyFit: Polygonal Surface Reconstruction from Point Clouds

Liangliang Nan; Peter Wonka

We propose a novel framework for reconstructing lightweight polygonal surfaces from point clouds. Unlike traditional methods that focus on either extracting good geometric primitives or obtaining proper arrangements of primitives, the emphasis of this work lies in intersecting the primitives (planes only) and seeking an appropriate combination of them to obtain a manifold polygonal surface model without boundary. We show that reconstruction from point clouds can be cast as a binary labeling problem. Our method is based on a hypothesizing and selection strategy. We first generate a reasonably large set of face candidates by intersecting the extracted planar primitives. Then an optimal subset of the candidate faces is selected through optimization. Our optimization is based on a binary linear programming formulation under hard constraints that enforce the final polygonal surface model to be manifold and watertight. Experiments on point clouds from various sources demonstrate that our method can generate lightweight polygonal surface models of arbitrary piecewise planar objects. Besides, our method is capable of recovering sharp features and is robust to noise, outliers, and missing data.
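The hard manifold/watertight constraint has a compact combinatorial reading: in the selected subset, every edge must border exactly zero or two faces. The sketch below is a stand-in for PolyFit's selection step (brute force replaces the binary linear program), with hypothetical face scores and hand-built edge sets.

```python
# Toy candidate-face selection under a manifold constraint: every edge of the
# selected faces must be shared by exactly two of them (no boundary edges).
from itertools import combinations

def select_faces(faces):
    """faces: {name: (score, edge_set)}. Returns the best manifold subset."""
    names = list(faces)
    best, best_score = [], float("-inf")
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            edge_count = {}
            for f in subset:
                for e in faces[f][1]:
                    edge_count[e] = edge_count.get(e, 0) + 1
            if any(c != 2 for c in edge_count.values()):
                continue                      # violates manifold/watertight constraint
            score = sum(faces[f][0] for f in subset)
            if score > best_score:
                best, best_score = list(subset), score
    return best

# Four triangular faces of a tetrahedron over vertices 0..3, plus a high-score
# face with a dangling edge that can never close up.
tet = {
    "f012": (1.0, {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})}),
    "f013": (1.0, {frozenset({0, 1}), frozenset({1, 3}), frozenset({0, 3})}),
    "f023": (1.0, {frozenset({0, 2}), frozenset({2, 3}), frozenset({0, 3})}),
    "f123": (1.0, {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}),
    "bad":  (5.0, {frozenset({4, 5})}),
}
print(sorted(select_faces(tet)))   # -> ['f012', 'f013', 'f023', 'f123']
```

The constraint prunes the high-scoring but unmatched face, which is exactly the role of the hard constraints in the binary-LP formulation.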


IEEE Transactions on Visualization and Computer Graphics | 2016

Automatic Constraint Detection for 2D Layout Regularization

Haiyong Jiang; Liangliang Nan; Dong-Ming Yan; Weiming Dong; Xiaopeng Zhang; Peter Wonka

In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
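A stripped-down version of the idea: detect near-equal left-edge coordinates (the constraint-detection step) and then enforce the detected equality constraints. The paper formulates a general quadratic program over alignment, size, and distance constraints; this sketch handles only one constraint type and uses the fact that, for pure equality constraints, minimizing the sum of squared displacements is solved by averaging each group.

```python
# Toy layout regularization: snap nearly aligned left edges to their group mean.
def regularize_left_edges(xs, tol=0.5):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    groups, current = [], [order[0]]
    for i in order[1:]:
        if xs[i] - xs[current[-1]] <= tol:   # constraint detection: near-equal
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    out = list(xs)
    for g in groups:
        mean = sum(xs[i] for i in g) / len(g)
        for i in g:                          # minimizes sum((x_i - xs_i)^2)
            out[i] = mean
    return out

print(regularize_left_edges([10.0, 10.2, 10.4, 25.0]))
```

The first three elements snap to a common edge while the distant fourth element is left untouched, which is the intended behaviour of detected (rather than globally imposed) constraints.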


Computer Graphics Forum | 2014

2D-3D Lifting for Shape Reconstruction

Liangliang Nan; Andrei Sharf; Baoquan Chen

We present an algorithm for shape reconstruction from incomplete 3D scans by fusing together two acquisition modes: 2D photographs and 3D scans. The two modes exhibit complementary characteristics: scans have depth information, but are often sparse and incomplete; photographs, on the other hand, are dense and have high resolution, but lack important depth information. In this work we fuse the two modes, taking advantage of their complementary information, to enhance 3D shape reconstruction from an incomplete scan with a 2D photograph. We compute geometrical and topological shape properties in 2D photographs and use them to reconstruct a shape from an incomplete 3D scan in a principled manner. Our key observation is that shape properties such as boundaries, smooth patches and local connectivity, can be inferred with high confidence from 2D photographs. Thus, we register the 3D scan with the 2D photograph and use scanned points as 3D depth cues for lifting 2D shape structures into 3D. Our contribution is an algorithm which significantly regularizes and enhances the problem of 3D reconstruction from partial scans by lifting 2D shape structures into 3D. We evaluate our algorithm on various shapes which are loosely scanned and photographed from different views, and compare them with state‐of‐the‐art reconstruction methods.
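The "lifting" step can be caricatured as follows: a dense 2D structure inherits depth from sparse, registered scan points. This toy uses nearest-neighbour depth transfer; the paper's lifting exploits richer inferred structures (boundaries, smooth patches, connectivity), and all names below are illustrative.

```python
# Toy 2D-to-3D lifting: assign each dense 2D curve sample the depth of its
# nearest registered 3D scan point (the sparse "depth cue").
def lift_curve(curve_2d, depth_cues):
    """curve_2d: [(x, y)]; depth_cues: {(x, y): z}. Returns [(x, y, z)]."""
    def nearest_depth(p):
        q = min(depth_cues,
                key=lambda c: (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2)
        return depth_cues[q]
    return [(x, y, nearest_depth((x, y))) for x, y in curve_2d]

cues = {(0.0, 0.0): 1.0, (4.0, 0.0): 3.0}      # sparse scanned points
curve = [(0.5, 0.0), (3.5, 0.0)]               # dense 2D structure
print(lift_curve(curve, cues))   # -> [(0.5, 0.0, 1.0), (3.5, 0.0, 3.0)]
```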

Collaboration


Dive into Liangliang Nan's collaborations.

Top Co-Authors

Peter Wonka (Arizona State University)
Andrei Sharf (Ben-Gurion University of the Negev)
Minglei Li (Nanjing University of Aeronautics and Astronautics)
Weijun Liu (Chinese Academy of Sciences)
Bernard Ghanem (King Abdullah University of Science and Technology)
Dong-Ming Yan (Chinese Academy of Sciences)
Haiyong Jiang (Chinese Academy of Sciences)
Ke Xie (Shenzhen University)