Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Runze Zhang is active.

Publication


Featured research published by Runze Zhang.


European Conference on Computer Vision | 2016

Graph-Based Consistent Matching for Structure-from-Motion

Tianwei Shen; Siyu Zhu; Tian Fang; Runze Zhang; Long Quan

Pairwise image matching of unordered image collections greatly affects the efficiency and accuracy of Structure-from-Motion (SfM). Insufficient match pairs may result in disconnected structures or incomplete components, while costly redundant pairs containing erroneous ones may lead to folded and superimposed structures. This paper presents a graph-based image matching method that tackles the issues of completeness, efficiency and consistency in a unified framework. Our approach starts by chaining all but singleton images using a visual-similarity-based minimum spanning tree. Then the minimum spanning tree is incrementally expanded to form locally consistent strong triplets. Finally, a global community-based graph algorithm is introduced to strengthen the global consistency by reinforcing potentially large connected components. We demonstrate the superior performance of our method in terms of accuracy and efficiency on both benchmark and Internet datasets. Our method also performs remarkably well on the challenging datasets of highly ambiguous and duplicated scenes.
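
A minimal sketch of the first stage described above, chaining images through a visual-similarity-based minimum spanning tree, assuming a precomputed pairwise similarity matrix and SciPy's MST routine; the function and variable names are illustrative and not taken from the paper's implementation.

```python
# Sketch: derive initial candidate match pairs from the MST of the
# visual-similarity graph (higher similarity -> lower edge cost).
# The similarity matrix and helper names are illustrative assumptions.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_match_pairs(similarity: np.ndarray):
    """Return the image pairs lying on the minimum spanning tree."""
    # Convert similarity to a positive cost; zero entries are treated
    # as missing edges by the sparse-graph routine.
    cost = np.where(similarity > 0, 1.0 / (similarity + 1e-9), 0.0)
    mst = minimum_spanning_tree(cost).tocoo()
    return [(int(i), int(j)) for i, j in zip(mst.row, mst.col)]

# Toy example with four images.
sim = np.array([[0.0, 0.9, 0.1, 0.0],
                [0.9, 0.0, 0.8, 0.2],
                [0.1, 0.8, 0.0, 0.7],
                [0.0, 0.2, 0.7, 0.0]])
print(mst_match_pairs(sim))  # e.g. [(0, 1), (1, 2), (2, 3)]
```

In the full method these MST pairs only seed the match graph, which is then expanded with locally consistent triplets and reinforced by the community-based global step.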


International Conference on Computer Vision | 2015

Joint Camera Clustering and Surface Segmentation for Large-Scale Multi-view Stereo

Runze Zhang; Shiwei Li; Tian Fang; Siyu Zhu; Long Quan

In this paper, we propose an optimal decomposition approach to large-scale multi-view stereo from an initial sparse reconstruction. The success of the approach depends on the introduction of surface-segmentation-based camera clustering rather than sparse-point-based camera clustering, which suffers from the problems of a non-uniform reconstruction coverage ratio and high redundancy. In detail, we introduce three criteria for camera clustering and surface segmentation for reconstruction, and we then formulate these criteria into a constrained energy minimization problem. To solve this problem, we propose a joint optimization in a hierarchical framework to obtain the final surface segments and corresponding optimal camera clusters. On each level of the hierarchical framework, the camera clustering problem is formulated as a parameter estimation problem of a probability model solved by a General Expectation-Maximization algorithm, and the surface segmentation problem is formulated as a Markov Random Field model based on the probability estimated by the preceding camera clustering step. Experiments on several Internet datasets and aerial photo datasets demonstrate that the proposed approach generates a more uniform and complete dense reconstruction with less redundancy, resulting in a more efficient multi-view stereo algorithm.
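
The alternation described above, EM-based camera clustering followed by probability-driven surface segmentation on each level, can be sketched as below. This is a heavily simplified stand-in: a Gaussian mixture over camera centers replaces the paper's General EM formulation, and a per-point posterior argmax replaces the MRF, so only the overall structure of the alternation is illustrated; all names are assumptions.

```python
# Simplified sketch of one level of the clustering/segmentation alternation.
# The real method couples the two steps with reconstruction criteria and a
# full MRF; here the MRF is reduced to a per-point argmax for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_and_segment(cam_centers, surf_points, n_clusters):
    # EM step: fit a Gaussian mixture over camera centers
    # (stand-in for the paper's General EM camera clustering).
    gmm = GaussianMixture(n_components=n_clusters, random_state=0)
    gmm.fit(cam_centers)
    cam_labels = gmm.predict(cam_centers)
    # "Segmentation" step: assign each surface point to the cluster with
    # the highest posterior at its location (unary term only, no pairwise
    # MRF smoothness).
    surf_labels = gmm.predict(surf_points)
    return cam_labels, surf_labels

rng = np.random.default_rng(0)
cams = rng.normal(size=(40, 3))      # toy camera centers
points = rng.normal(size=(500, 3))   # toy sparse surface points
cam_labels, surf_labels = cluster_and_segment(cams, points, n_clusters=3)
print(np.bincount(cam_labels), np.bincount(surf_labels))
```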


Asian Conference on Computer Vision | 2014

Multi-view Geometry Compression

Siyu Zhu; Tian Fang; Runze Zhang; Long Quan

For large-scale and highly redundant photo collections, eliminating statistical redundancy in multi-view geometry is of great importance to efficient 3D reconstruction. Our approach takes the full set of images with initial calibration and recovered sparse 3D points as inputs, and obtains a subset of views that preserve the final reconstruction accuracy and completeness well. We first construct an image quality graph, in which each vertex represents an input image, and the problem is then to determine a connected sub-graph guaranteeing a consistent reconstruction and maximizing the accuracy and completeness of the final reconstruction. Unlike previous works, which only address the problem of efficient structure from motion (SfM), our technique is highly applicable to the whole reconstruction pipeline, and solves the problems of efficient bundle adjustment, multi-view stereo (MVS), and subsequent variational refinement.
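
A minimal sketch of the view-selection idea, assuming a NetworkX match graph with per-image quality scores: greedily drop the lowest-quality images while keeping the graph connected. The paper formulates this as a constrained optimization on an image quality graph; the greedy rule, scores, and function name below are illustrative approximations only.

```python
# Sketch: prune low-quality images while preserving a connected image graph,
# until a target subset size is reached. Quality scores are assumed given.
import networkx as nx

def compress_views(graph: nx.Graph, quality: dict, keep: int):
    g = graph.copy()
    # Try removing images from worst to best quality.
    for node in sorted(quality, key=quality.get):
        if g.number_of_nodes() <= keep:
            break
        g.remove_node(node)
        if not nx.is_connected(g):
            # Removal would break the reconstruction graph: put the image
            # back together with its surviving edges.
            g.add_node(node)
            g.add_edges_from((node, nbr) for nbr in graph.neighbors(node)
                             if nbr in g)
    return set(g.nodes)

g = nx.cycle_graph(6)                  # toy match graph of 6 images
q = {i: float(i) for i in g.nodes}     # toy per-image quality scores
print(compress_views(g, q, keep=4))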


Asian Conference on Computer Vision | 2014

Multi-scale Tetrahedral Fusion of a Similarity Reconstruction and Noisy Positional Measurements

Runze Zhang; Tian Fang; Siyu Zhu; Long Quan

The fusion of a 3D reconstruction from monocular videos, known only up to a similarity transformation, with metric positional measurements from GPS usually relies on the alignment of the two coordinate systems. When the positional measurements provided by a low-cost GPS are corrupted by high levels of noise, this approach becomes problematic. In this paper, we introduce a novel framework that uses similarity invariants to form a tetrahedral network of views for the fusion. Such a tetrahedral network decouples the alignment from the fusion to combat the high noise levels. We then update the similarity transformation each time a well-conditioned motion of cameras is successfully identified. Moreover, we develop a multi-scale sampling strategy to reduce the computational overhead and to adapt the algorithm to different levels of noise. It is important to note that our optimization framework can be applied in both batch and incremental manners. Experiments on simulated and real datasets demonstrate the robustness and efficiency of our method.
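
For context, the sketch below shows the plain similarity alignment, a closed-form (Umeyama-style) estimate of scale, rotation, and translation between reconstructed camera centers and GPS positions, which is the baseline alignment that becomes problematic under heavy GPS noise; it is not the paper's tetrahedral-network fusion.

```python
# Baseline similarity alignment: estimate s, R, t with dst ~ s * R @ src + t.
# All data below is synthetic and only meant to exercise the routine.
import numpy as np

def umeyama_alignment(src: np.ndarray, dst: np.ndarray):
    """Closed-form similarity transform between two point sets (rows)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy check: recover a known similarity transform from noisy "GPS".
rng = np.random.default_rng(1)
cams = rng.normal(size=(20, 3))
rot = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
gps = 2.5 * cams @ rot.T + np.array([10.0, 5.0, 0.0])
gps += rng.normal(scale=0.05, size=gps.shape)
s, R, t = umeyama_alignment(cams, gps)
print(round(float(s), 2))  # ~2.5
```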


Computer Vision and Pattern Recognition | 2018

Very Large-Scale Global SfM by Distributed Motion Averaging

Siyu Zhu; Runze Zhang; Lei Zhou; Tianwei Shen; Tian Fang; Ping Tan; Long Quan


International Conference on Computer Vision | 2017

Distributed Very Large Scale Bundle Adjustment by Global Camera Consensus

Runze Zhang; Siyu Zhu; Tian Fang; Long Quan


arXiv: Computer Vision and Pattern Recognition | 2017

Parallel Structure from Motion from Local Increment to Global Averaging

Siyu Zhu; Tianwei Shen; Lei Zhou; Runze Zhang; Jinglu Wang; Tian Fang; Long Quan


European Conference on Computer Vision | 2018

GeoDesc: Learning Local Descriptors by Integrating Geometry Constraints

Zixin Luo; Tianwei Shen; Lei Zhou; Siyu Zhu; Runze Zhang; Yao Yao; Tian Fang; Long Quan


arXiv: Computer Vision and Pattern Recognition | 2018

Learning and Matching Multi-View Descriptors for Registration of Point Clouds

Lei Zhou; Siyu Zhu; Zixin Luo; Tianwei Shen; Runze Zhang; Mingmin Zhen; Tian Fang; Long Quan


Archive | 2017

Accurate, Scalable and Parallel Structure from Motion

Siyu Zhu; Tianwei Shen; Lei Zhou; Runze Zhang; Tian Fang; Long Quan

Collaboration


Dive into Runze Zhang's collaborations.

Top Co-Authors

Long Quan | Hong Kong University of Science and Technology
Siyu Zhu | Hong Kong University of Science and Technology
Tian Fang | Hong Kong University of Science and Technology
Tianwei Shen | Hong Kong University of Science and Technology
Lei Zhou | Hong Kong University of Science and Technology
Jinglu Wang | Hong Kong University of Science and Technology
Shiwei Li | Hong Kong University of Science and Technology
Yao Yao | Hong Kong University of Science and Technology
Ping Tan | Simon Fraser University