Paul Merrell
University of North Carolina at Chapel Hill
Publications
Featured research published by Paul Merrell.
International Journal of Computer Vision | 2008
Marc Pollefeys; David Nistér; Jan Michael Frahm; Amir Akbarzadeh; Philippos Mordohai; Brian Clipp; Chris Engels; David Gallup; Seon Joo Kim; Paul Merrell; C. Salmi; Sudipta N. Sinha; B. Talton; Liang Wang; Qingxiong Yang; Henrik Stewenius; Ruigang Yang; Greg Welch; Herman Towles
The paper presents a system for automatic, geo-registered, real-time 3D reconstruction from video of urban scenes. The system collects video streams, as well as GPS and inertial measurements, in order to place the reconstructed models in geo-registered coordinates. It is designed using current state-of-the-art real-time modules for all processing steps. It employs commodity graphics hardware and standard CPUs to achieve real-time performance. We present the main considerations in designing the system and the steps of the processing pipeline. Our system extends existing algorithms to meet the robustness and variability necessary to operate out of the lab. To account for the large dynamic range of outdoor videos, the processing pipeline estimates global camera gain changes in the feature tracking stage and efficiently compensates for these in stereo estimation without impacting the real-time performance. The required accuracy for many applications is achieved with a two-step stereo reconstruction process exploiting the redundancy across frames. We show results on real video sequences comprising hundreds of thousands of frames.
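As a rough illustration of the gain-compensation idea, the sketch below estimates a single multiplicative gain between two frames from tracked feature patches and folds it into a SAD stereo cost. Both the estimator and the cost are simplified stand-ins, not the system's actual modules:

```python
# A minimal sketch, assuming grayscale float images and a single global
# multiplicative gain change between frames; the paper's estimator runs
# inside the feature tracker and is more careful than this.
import numpy as np

def estimate_global_gain(patches_a: np.ndarray, patches_b: np.ndarray) -> float:
    """Estimate one gain ratio between two frames from corresponding
    feature patches (shape: [num_features, h, w])."""
    return float(patches_b.mean() / max(patches_a.mean(), 1e-6))

def sad_cost_with_gain(left: np.ndarray, right: np.ndarray,
                       disparity: int, gain: float) -> np.ndarray:
    """Per-pixel SAD matching cost at one disparity after undoing the
    estimated gain change, so exposure shifts do not look like bad matches."""
    right_comp = right.astype(np.float32) / gain
    shifted = np.roll(right_comp, disparity, axis=1)  # horizontal shift
    return np.abs(left.astype(np.float32) - shifted)
```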
International Conference on Computer Vision | 2007
Paul Merrell; Amir Akbarzadeh; Liang Wang; Philippos Mordohai; Jan Michael Frahm; Ruigang Yang; David Nistér; Marc Pollefeys
We present a viewpoint-based approach for the quick fusion of multiple stereo depth maps. Our method selects depth estimates for each pixel that minimize violations of visibility constraints and thus remove errors and inconsistencies from the depth maps to produce a consistent surface. We advocate a two-stage process in which the first stage generates potentially noisy, overlapping depth maps from a set of calibrated images and the second stage fuses these depth maps to obtain an integrated surface with higher accuracy, suppressed noise, and reduced redundancy. We show that by dividing the processing into two stages we are able to achieve a very high throughput because we are able to use a computationally cheap stereo algorithm and because this architecture is amenable to hardware-accelerated (GPU) implementations. A rigorous formulation based on the notion of stability of a depth estimate is presented first. It aims to determine the validity of a depth estimate by rendering multiple depth maps into the reference view as well as rendering the reference depth map into the other views in order to detect occlusions and free-space violations. We also present an approximate alternative formulation that selects and validates only one hypothesis based on confidence. Both formulations enable us to perform video-based reconstruction at up to 25 frames per second. We show results on the multi-view stereo evaluation benchmark datasets and several outdoor video sequences. Extensive quantitative analysis is performed using an accurately surveyed model of a real building as ground truth.
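The toy fusion step below captures the per-pixel selection logic under a strong simplification: it assumes all depth hypotheses have already been resampled into the reference view (the real system does this by rendering depth maps on the GPU), and it scores each hypothesis by support minus occlusion and free-space conflicts:

```python
import numpy as np

def fuse_depths(depth_stack: np.ndarray, eps: float = 0.05) -> np.ndarray:
    """depth_stack: [num_maps, H, W] hypotheses already warped into the
    reference view. For each pixel, keep the hypothesis with the best
    support-minus-conflict score, where agreement within a relative
    tolerance counts as support, closer surfaces as occlusion conflicts,
    and farther surfaces as free-space conflicts."""
    n, h, w = depth_stack.shape
    best_score = np.full((h, w), -np.inf)
    fused = np.zeros((h, w))
    for i in range(n):
        cand = depth_stack[i]
        rel = (depth_stack - cand) / np.maximum(cand, 1e-6)
        support = (np.abs(rel) <= eps).sum(axis=0)
        occlusion = (rel < -eps).sum(axis=0)   # other maps see a closer surface
        free_space = (rel > eps).sum(axis=0)   # other maps see past this one
        score = support - occlusion - free_space
        take = score > best_score
        fused[take] = cand[take]
        best_score[take] = score[take]
    return fused
```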
International Symposium on 3D Data Processing, Visualization and Transmission | 2006
Amir Akbarzadeh; Jan Michael Frahm; Philippos Mordohai; Brian Clipp; Chris Engels; David Gallup; Paul Merrell; M. Phelps; Sudipta N. Sinha; B. Talton; Liang Wang; Qingxiong Yang; Henrik Stewenius; Ruigang Yang; Greg Welch; Herman Towles; David Nistér; Marc Pollefeys
The paper introduces a data collection system and a processing pipeline for automatic geo-registered 3D reconstruction of urban scenes from video. The system collects multiple video streams, as well as GPS and INS measurements, in order to place the reconstructed models in geo-registered coordinates. Besides high quality in terms of both geometry and appearance, we aim at real-time performance. Even though our processing pipeline is currently far from being real-time, we select techniques and design processing modules that can achieve fast performance on multiple CPUs and GPUs, aiming at real-time performance in the near future. We present the main considerations in designing the system and the steps of the processing pipeline. We show results on real video sequences captured by our system.
International Conference on Computer Graphics and Interactive Techniques | 2011
Paul Merrell; Eric Schkufza; Zeyang Li; Maneesh Agrawala; Vladlen Koltun
We present an interactive furniture layout system that assists users by suggesting furniture arrangements that are based on interior design guidelines. Our system incorporates the layout guidelines as terms in a density function and generates layout suggestions by rapidly sampling the density function using a hardware-accelerated Monte Carlo sampler. Our results demonstrate that the suggestion generation functionality measurably increases the quality of furniture arrangements produced by participants with no prior training in interior design.
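A minimal sketch of the suggestion mechanism, assuming two toy guideline terms (keep pieces inside the room, keep clearance between pieces) in place of the paper's interior design guidelines, and a plain CPU Metropolis sampler in place of the hardware-accelerated one:

```python
import numpy as np

rng = np.random.default_rng(0)

def layout_cost(pos: np.ndarray, room: float = 10.0, min_gap: float = 1.0) -> float:
    """Toy guideline terms: quadratic penalties for leaving the square
    room and for pairs of pieces closer than min_gap."""
    cost = np.sum(np.clip(pos, None, 0.0) ** 2)
    cost += np.sum(np.clip(pos - room, 0.0, None) ** 2)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    cost += np.sum(np.clip(min_gap - d[iu], 0.0, None) ** 2)
    return float(cost)

def suggest_layout(n_items: int = 5, steps: int = 5000, temp: float = 0.1) -> np.ndarray:
    """Metropolis sampling of the density p(layout) ~ exp(-cost / temp):
    jitter the 2D positions and accept moves by the density ratio."""
    pos = rng.uniform(0.0, 10.0, size=(n_items, 2))
    c = layout_cost(pos)
    for _ in range(steps):
        prop = pos + rng.normal(scale=0.3, size=pos.shape)
        cp = layout_cost(prop)
        if cp <= c or rng.random() < np.exp((c - cp) / temp):
            pos, c = prop, cp
    return pos
```

Each run of the sampler yields one candidate arrangement; running several independent chains gives a set of distinct suggestions to show the user.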
International Conference on Computer Graphics and Interactive Techniques | 2010
Paul Merrell; Eric Schkufza; Vladlen Koltun
We present a method for automated generation of building layouts for computer graphics applications. Our approach is motivated by the layout design process developed in architecture. Given a set of high-level requirements, an architectural program is synthesized using a Bayesian network trained on real-world data. The architectural program is realized in a set of floor plans, obtained through stochastic optimization. The floor plans are used to construct a complete three-dimensional building with internal structure. We demonstrate a variety of computer-generated buildings produced by the presented approach.
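The sketch below shows the ancestral-sampling step on a deliberately tiny, hand-made Bayesian network (house size → bedroom count → bathroom count); the conditional probability tables are hypothetical, whereas the paper's network is learned from real-world floor-plan data:

```python
import random

# Hypothetical conditional probability tables (the paper trains these on data).
P_SIZE = {"small": 0.4, "large": 0.6}
P_BEDROOMS = {"small": {2: 0.7, 3: 0.3}, "large": {3: 0.5, 4: 0.5}}
P_BATHROOMS = {2: {1: 0.8, 2: 0.2}, 3: {2: 0.7, 3: 0.3}, 4: {2: 0.3, 3: 0.7}}

def sample(dist):
    """Draw one value from a {value: probability} table."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # guard against floating-point round-off

def sample_program():
    """Ancestral sampling: draw each variable conditioned on its parents,
    yielding one architectural program to hand to the floor-plan optimizer."""
    size = sample(P_SIZE)
    bedrooms = sample(P_BEDROOMS[size])
    bathrooms = sample(P_BATHROOMS[bedrooms])
    return {"size": size, "bedrooms": bedrooms, "bathrooms": bathrooms}
```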
Interactive 3D Graphics and Games | 2007
Paul Merrell
Model synthesis is a new approach to 3D modeling that automatically generates large models resembling a small example model provided by the user. Model synthesis extends the 2D texture synthesis problem into higher dimensions and can be used to model many different objects and environments. The user only needs to provide an appropriate example model and does not need to give any other instructions about how to generate the model. Model synthesis can be used to create symmetric models, models that change over time, and models that fit soft constraints. There are two important differences between our method and existing texture synthesis algorithms. The first is the use of a global search to find potential conflicts before adding new material to the model. The second is that we divide the problem of generating a large model into smaller subproblems that are easier to solve.
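A one-dimensional toy of the core loop, assuming strings in place of 3D models: the output may only contain adjacencies seen in the example, and before committing a label the constraint is propagated through the whole output (the global search for conflicts described above), backing out if any cell would be left with no legal label:

```python
import random
from collections import deque

def synthesize(example: str, length: int, seed: int = 0) -> str:
    allowed = {(a, b) for a, b in zip(example, example[1:])}
    domains = [set(example) for _ in range(length)]
    rng = random.Random(seed)

    def propagate(doms):
        """Arc-consistency sweep: remove labels with no legal neighbor;
        return False if some cell ends up with an empty domain."""
        queue = deque(range(length))
        while queue:
            i = queue.popleft()
            if i + 1 < length:
                ok = {b for b in doms[i + 1] if any((a, b) in allowed for a in doms[i])}
                if ok != doms[i + 1]:
                    doms[i + 1] = ok
                    queue.append(i + 1)
            if i > 0:
                ok = {a for a in doms[i - 1] if any((a, b) in allowed for b in doms[i])}
                if ok != doms[i - 1]:
                    doms[i - 1] = ok
                    queue.append(i - 1)
        return all(doms)

    for i in range(length):
        for choice in rng.sample(sorted(domains[i]), len(domains[i])):
            trial = [set(d) for d in domains]
            trial[i] = {choice}
            if propagate(trial):       # commit only conflict-free choices
                domains = trial
                break
    return "".join(next(iter(d)) for d in domains)

print(synthesize("ABABBAB", 12))  # e.g. never contains 'AA', since the example has none
```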
Computer Graphics Forum | 2010
Jason Sewall; David Wilkie; Paul Merrell; Ming C. Lin
We present a novel method for the synthesis and animation of realistic traffic flows on large-scale road networks. Our technique is based on a continuum model of traffic flow that we extend to correctly handle lane changes and merges, as well as traffic behaviors due to changes in speed limit. We demonstrate how our method can be applied to the animation of many vehicles in a large-scale traffic network at interactive rates and show that our method can simulate believable traffic flows on publicly-available, real-world road data. We furthermore demonstrate the scalability of this technique on many-core systems.
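The single-lane core of such a continuum model is the LWR equation; the sketch below runs Godunov finite-volume steps of it (the paper extends a richer continuum model with lane changes, merges, and speed-limit effects, none of which appear here):

```python
import numpy as np

def lwr_step(rho: np.ndarray, dt: float, dx: float,
             v_max: float = 1.0, rho_max: float = 1.0) -> np.ndarray:
    """One Godunov step of rho_t + (rho * v(rho))_x = 0 with the linear
    speed law v(rho) = v_max * (1 - rho / rho_max). Endpoint cells are
    held fixed as crude boundary conditions."""
    f = lambda r: r * v_max * (1.0 - r / rho_max)
    rho_c = rho_max / 2.0                       # density of maximum flux
    rl, rr = rho[:-1], rho[1:]
    flux = np.where(rl <= rr,
                    np.minimum(f(rl), f(rr)),               # rarefaction
                    np.where((rl >= rho_c) & (rr <= rho_c),
                             f(rho_c),                      # transonic case
                             np.maximum(f(rl), f(rr))))     # shock
    out = rho.copy()
    out[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return out

rho = np.where(np.arange(200) < 100, 0.8, 0.1)  # a jam ahead of free road
for _ in range(300):                            # dt satisfies the CFL bound
    rho = lwr_step(rho, dt=0.4, dx=1.0)
```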
International Conference on Computer Graphics and Interactive Techniques | 2008
Paul Merrell; Dinesh Manocha
We present a novel method for procedurally modeling large complex shapes. Our approach is general-purpose and takes as input any 3D polyhedral model provided by a user. The algorithm exploits the connectivity between the adjacent boundary features of the input model and computes an output model that has similar connected features and resembles the input. No additional user input is needed to guide the model generation and the algorithm proceeds automatically. In practice, our algorithm is simple to implement and can generate a variety of complex shapes representing buildings, landscapes, and 3D fractal shapes in a few minutes.
IEEE Transactions on Visualization and Computer Graphics | 2011
Paul Merrell; Dinesh Manocha
We present a method for procedurally modeling general complex 3D shapes. Our approach can automatically generate complex models of buildings, man-made structures, or urban data sets in a few minutes based on user-defined inputs. The algorithm attempts to generate complex 3D models that resemble a user-defined input model and satisfy various dimensional, geometric, and algebraic constraints to control the shape. These constraints are used to capture the intent of the user and generate shapes that look more natural. We also describe efficient techniques to handle complex shapes and highlight the performance of our approach on many different types of models. We compare model synthesis algorithms with other procedural modeling techniques, discuss the advantages of different approaches, and describe a close connection between model synthesis and context-sensitive grammars.
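As a tiny illustration of how a dimensional constraint can steer synthesis, the hypothetical check below (in the spirit of the 1D sketch given earlier) rejects candidates in which one feature grows past a user-specified size; such a predicate can gate the commit step of the synthesis loop:

```python
def within_max_run(s: str, label: str = "A", max_run: int = 3) -> bool:
    """Toy dimensional constraint: no run of 'label' may exceed max_run."""
    run = 0
    for ch in s:
        run = run + 1 if ch == label else 0
        if run > max_run:
            return False
    return True
```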
Mobile Robots Conference | 2004
Paul Merrell; Dah-Jye Lee; Randal W. Beard
In order for an unmanned aerial vehicle (UAV) to safely fly close to the ground, it must be capable of detecting and avoiding obstacles in its flight path. From a single camera on the UAV, the 3D structure of its surrounding environment, including any obstacles, can be estimated from motion parallax using a technique called structure from motion. Most structure-from-motion algorithms attempt to reconstruct the 3D structure of the environment from a single optical flow value at each feature point. We present a novel method for calculating structure from motion that does not require a precise calculation of optical flow at each feature point. Due to the effects of image noise and the aperture problem, it may be impossible to accurately calculate a single optical flow value at each feature point. Instead, we may only be able to calculate a set of likely optical flow values and their associated probabilities, i.e., an optical flow probability distribution. Using this probability distribution, a more robust method for calculating structure from motion is developed. This method is being developed for use on a UAV to detect obstacles, but it can be used on any vehicle where obstacle avoidance is needed.
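A sketch of the distribution-based idea, assuming a small SAD search window and a softmax-over-costs distribution (the paper derives its flow probabilities from an image noise model; pure sideways camera translation is also assumed here so that flow magnitude is proportional to inverse depth):

```python
import numpy as np

def flow_distribution(patch: np.ndarray, search: np.ndarray,
                      top_left: tuple, radius: int = 4, beta: float = 10.0):
    """Score every candidate displacement in a window by SAD and turn the
    costs into a probability distribution over optical flow. 'top_left'
    is the patch's nominal position in 'search'; the caller must keep the
    search window inside the image."""
    ph, pw = patch.shape
    cy, cx = top_left
    flows, costs = [], []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = search[cy + dy:cy + dy + ph, cx + dx:cx + dx + pw]
            flows.append((dy, dx))
            costs.append(float(np.abs(cand - patch).mean()))
    costs = np.array(costs)
    probs = np.exp(-beta * (costs - costs.min()))
    return np.array(flows), probs / probs.sum()

def expected_inverse_depth(flows: np.ndarray, probs: np.ndarray,
                           baseline_focal: float = 1.0) -> float:
    """Under sideways translation, |flow| = baseline * focal / depth, so
    averaging |flow| over the distribution gives a robust inverse-depth
    estimate instead of trusting a single brittle flow value."""
    mags = np.linalg.norm(flows, axis=1)
    return float((probs * mags).sum() / baseline_focal)
```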