Publication


Featured research published by Chris Sweeney.


International Conference on Computer Vision | 2015

Optimizing the Viewing Graph for Structure-from-Motion

Chris Sweeney; Torsten Sattler; Tobias Höllerer; Matthew Turk; Marc Pollefeys

The viewing graph represents a set of views that are related by pairwise relative geometries. In the context of Structure-from-Motion (SfM), the viewing graph is the input to the incremental or global estimation pipeline. Much effort has been put towards developing robust algorithms to overcome potentially inaccurate relative geometries in the viewing graph during SfM. In this paper, we take a fundamentally different approach to SfM and instead focus on improving the quality of the viewing graph before applying SfM. Our main contribution is a novel optimization that improves the quality of the relative geometries in the viewing graph by enforcing loop consistency constraints with the epipolar point transfer. We show that this optimization greatly improves the accuracy of relative poses in the viewing graph and removes the need for filtering steps or robust algorithms typically used in global SfM methods. In addition, the optimized viewing graph can be used to efficiently calibrate cameras at scale. We combine our viewing graph optimization and focal length calibration into a global SfM pipeline that is more efficient than existing approaches. To our knowledge, ours is the first global SfM pipeline capable of handling uncalibrated image sets.
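The loop-consistency idea can be illustrated with the simplest case: chaining pairwise relative rotations around a triplet of views should return to the identity, and the residual angle measures the inconsistency. The sketch below shows only that rotation check (the paper's optimization additionally enforces consistency via the epipolar point transfer); function names are illustrative, not from the paper's code.

```python
import numpy as np

def rotation_angle_deg(R):
    # Angle of a rotation matrix, recovered from its trace.
    c = (np.trace(R) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def triplet_consistency_error(R_ij, R_jk, R_ki):
    # Chaining the three relative rotations around a loop should
    # return to the identity; the residual angle measures the error
    # in the corresponding viewing-graph edges.
    loop = R_ki @ R_jk @ R_ij
    return rotation_angle_deg(loop)
```

Edges whose triplet loops close with near-zero residual are mutually consistent; large residuals flag inaccurate relative geometries of the kind the paper's optimization corrects.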


European Conference on Computer Vision | 2014

gDLS: A Scalable Solution to the Generalized Pose and Scale Problem

Chris Sweeney; Victor Fragoso; Tobias Höllerer; Matthew Turk

In this work, we present a scalable least-squares solution for computing a seven degree-of-freedom similarity transform. Our method utilizes the generalized camera model to compute relative rotation, translation, and scale from four or more 2D-3D correspondences. In particular, structure and motion estimations from monocular cameras lack scale without specific calibration. As such, our methods have applications in loop closure in visual odometry and registering multiple structure from motion reconstructions where scale must be recovered. We formulate the generalized pose and scale problem as a minimization of a least squares cost function and solve this minimization without iterations or initialization. Additionally, we obtain all minima of the cost function. The order of the polynomial system that we solve is independent of the number of points, allowing our overall approach to scale favorably. We evaluate our method experimentally on synthetic and real datasets and demonstrate that our methods produce higher accuracy similarity transform solutions than existing methods.
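For context, the seven degree-of-freedom similarity transform being solved for has the form p' = s R p + t. The sketch below is the classical closed-form 3D-3D alignment (Umeyama style), which is the kind of baseline gDLS improves upon by working from 2D-3D correspondences with a generalized camera; it is an illustrative contrast, not the paper's solver.

```python
import numpy as np

def umeyama_alignment(src, dst):
    # Closed-form 3D-3D similarity alignment (Umeyama 1991): finds
    # scale s, rotation R, translation t minimizing ||dst - (s R src + t)||^2.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    # Reflection guard keeps R a proper rotation (det = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (xs ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Note the dependence on 3D-3D point pairs, whose depth uncertainty is exactly what motivates solving for the similarity from image-based correspondences instead.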


ACM Multimedia | 2015

Theia: A Fast and Scalable Structure-from-Motion Library

Chris Sweeney; Tobias Höllerer; Matthew Turk

In this paper, we present a comprehensive multi-view geometry library, Theia, that focuses on large-scale SfM. In addition to state-of-the-art scalable SfM pipelines, the library provides numerous tools that are useful for students, researchers, and industry experts in the field of multi-view geometry. Theia contains clean code that is well documented (with code comments and the website) and easy to extend. The modular design allows users to easily implement and experiment with new algorithms within our current pipeline without having to implement a full end-to-end SfM pipeline themselves. Theia has already gathered a large number of diverse users from universities, startups, and industry, and we hope to continue to gather users and active contributors from the open-source community.


International Conference on 3D Vision | 2014

Solving for Relative Pose with a Partially Known Rotation is a Quadratic Eigenvalue Problem

Chris Sweeney; John Flynn; Matthew Turk

We propose a novel formulation of minimal case solutions for determining the relative pose of perspective and generalized cameras given a partially known rotation, namely, a known axis of rotation. An axis of rotation may be easily obtained by detecting vertical vanishing points with computer vision techniques, or with the aid of sensor measurements from a smartphone. Given a known axis of rotation, our algorithms solve for the angle of rotation around the known axis along with the unknown translation. We formulate these relative pose problems as Quadratic Eigenvalue Problems which are very simple to construct. We run several experiments on synthetic and real data to compare our methods to the current state-of-the-art algorithms. Our methods provide several advantages over alternative methods, including efficiency and accuracy, particularly in the presence of image and sensor noise as is often the case for mobile devices.
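One reason quadratic eigenvalue problems are simple to solve is that (l^2 M + l C + K) x = 0 reduces, via companion linearization, to an ordinary eigenproblem of twice the size. A minimal sketch, assuming M is invertible; the matrices here are generic placeholders, not the paper's specific coefficient matrices.

```python
import numpy as np

def solve_qep(M, C, K):
    # Solve the quadratic eigenvalue problem (l^2 M + l C + K) x = 0
    # by companion linearization: for z = [x; l*x], the 2n x 2n matrix
    # below satisfies A z = l z, so an ordinary eigensolver applies.
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    eigvals, eigvecs = np.linalg.eig(A)
    # The QEP eigenvectors are the first n components of each z.
    return eigvals, eigvecs[:n, :]
```

In the scalar case M = 1, C = -3, K = 2 this recovers the roots of l^2 - 3l + 2, i.e. l = 1 and l = 2.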


European Conference on Computer Vision | 2014

On Sampling Focal Length Values to Solve the Absolute Pose Problem

Torsten Sattler; Chris Sweeney; Marc Pollefeys

Estimating the absolute pose of a camera relative to a 3D representation of a scene is a fundamental step in many geometric Computer Vision applications. When the camera is calibrated, the pose can be computed very efficiently. If the calibration is unknown, the problem becomes much harder, resulting in slower solvers or solvers requiring more samples and thus significantly longer run-times for RANSAC. In this paper, we challenge the notion that using minimal solvers is always optimal and propose to compute the pose for a camera with unknown focal length by randomly sampling a focal length value and using an efficient pose solver for the now calibrated camera. Our main contribution is a novel sampling scheme that enables us to guide the sampling process towards promising focal length values and avoids considering all possible values once a good pose is found. The resulting RANSAC variant is significantly faster than current state-of-the-art pose solvers, especially for low inlier ratios, while achieving a similar or better pose accuracy.
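The core idea can be caricatured in a few lines: draw a focal length candidate, treat the camera as calibrated, run a fast minimal pose solver, and keep the hypothesis with the most inliers. This sketch omits the paper's guided sampling scheme; solve_pose and count_inliers are illustrative placeholders for a calibrated P3P solver and an inlier test, not the paper's API.

```python
import random

def ransac_with_focal_sampling(correspondences, focal_candidates,
                               solve_pose, count_inliers, iters=1000):
    # Simplified RANSAC variant: instead of a slower unknown-focal
    # minimal solver, sample a focal length value, treat the camera
    # as calibrated, and use an efficient calibrated pose solver.
    best = (0, None, None)  # (inlier count, pose, focal length)
    for _ in range(iters):
        f = random.choice(focal_candidates)
        sample = random.sample(correspondences, 3)  # P3P-sized sample
        pose = solve_pose(sample, f)
        if pose is None:
            continue
        n_in = count_inliers(pose, f, correspondences)
        if n_in > best[0]:
            best = (n_in, pose, f)
    return best
```

The paper's contribution on top of this skeleton is steering the focal length sampling toward promising values and pruning the candidate set once a good pose is found.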


International Symposium on Mixed and Augmented Reality | 2015

Efficient Computation of Absolute Pose for Gravity-Aware Augmented Reality

Chris Sweeney; John Flynn; Benjamin Nuernberger; Matthew Turk; Tobias Höllerer

We propose a novel formulation for determining the absolute pose of a single or multi-camera system given a known vertical direction. The vertical direction may be easily obtained by detecting the vertical vanishing points with computer vision techniques, or with the aid of IMU sensor measurements from a smartphone. Our solver is general and able to compute absolute camera pose from two 2D-3D correspondences for single or multi-camera systems. We run several synthetic experiments that demonstrate our algorithm's improved robustness to image and IMU noise compared to the current state of the art. Additionally, we run an image localization experiment that demonstrates the accuracy of our algorithm in real-world scenarios. Finally, we show that our algorithm provides increased performance for real-time model-based tracking compared to solvers that do not utilize the vertical direction, and we show our algorithm in use with an augmented reality application running on a Google Tango tablet.
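With gravity known, both camera and world frames can be aligned to the vertical, so the unknown rotation collapses to a single yaw angle about the vertical axis. A minimal sketch of that 1-DoF parameterization, taking y as the vertical axis by convention (the choice of axis is an assumption of this sketch):

```python
import numpy as np

def yaw_rotation(theta):
    # Rotation about the vertical (y) axis. Once the vertical direction
    # is known from an IMU or vanishing points, the unknown rotation in
    # the absolute pose problem reduces to this single-angle form,
    # which is what makes two 2D-3D correspondences sufficient.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,   0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s,  0.0, c]])
```

Because rotations about a shared axis commute and their angles add, this family is closed under composition, which keeps the resulting solvers small.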


Computer Vision and Pattern Recognition | 2015

Computing similarity transformations from only image correspondences

Chris Sweeney; Laurent Kneip; Tobias Höllerer; Matthew Turk

We propose a novel solution for computing the relative pose between two generalized cameras that includes reconciling the internal scale of the generalized cameras. This approach can be used to compute a similarity transformation between two coordinate systems, making it useful for loop closure in visual odometry and registering multiple structure from motion reconstructions together. In contrast to alternative similarity transformation methods, our approach uses 2D-2D image correspondences and thus is not subject to the depth uncertainty that often arises with 3D points. We utilize a known vertical direction (which may be easily obtained from IMU data or vertical vanishing point detection) of the generalized cameras to solve the generalized relative pose and scale problem as an efficient Quadratic Eigenvalue Problem. To our knowledge, this is the first method for computing similarity transformations that does not require any 3D information. Our experiments on synthetic and real data demonstrate that this leads to improved performance compared to methods that use 3D-3D or 2D-3D correspondences, especially as the depth of the scene increases.


IEEE Transactions on Visualization and Computer Graphics | 2014

Model Estimation and Selection towards Unconstrained Real-Time Tracking and Mapping

Steffen Gauglitz; Chris Sweeney; Jonathan Ventura; Matthew Turk; Tobias Höllerer

We present an approach and prototype implementation to initialization-free real-time tracking and mapping that supports any type of camera motion in 3D environments, that is, parallax-inducing as well as rotation-only motions. Our approach effectively behaves like a keyframe-based Simultaneous Localization and Mapping system or a panorama tracking and mapping system, depending on the camera movement. It seamlessly switches between the two modes and is thus able to track and map through arbitrary sequences of parallax-inducing and rotation-only camera movements. The system integrates both model-based and model-free tracking, automatically choosing between the two depending on the situation, and subsequently uses the “Geometric Robust Information Criterion” to decide whether the current camera motion can best be represented as a parallax-inducing motion or a rotation-only motion. It continues to collect and map data after tracking failure by creating separate tracks which are later merged if they are found to overlap. This is in contrast to most existing tracking and mapping systems, which suspend tracking and mapping and thus discard valuable data until relocalization with respect to the initial map is successful. We tested our prototype implementation on a variety of video sequences, successfully tracking through different camera motions and fully automatically building combinations of panoramas and 3D structure.
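The model-selection step can be sketched with Torr's GRIC score: each motion model is scored by its robustly capped residuals plus penalties on model dimension and parameter count, and the lower score wins. This is one common form of the criterion with illustrative default weights, not the exact constants used in the system described above.

```python
import numpy as np

def gric(residuals, sigma, d, k, r, n, lam1=2.0, lam2=4.0, lam3=2.0):
    # Geometric Robust Information Criterion (one common form):
    #   GRIC = sum_i rho(e_i^2) + lam1 * d * n + lam2 * k
    # d: model dimension (2 for rotation-only/homography motion,
    #    3 for parallax-inducing 3D structure),
    # k: number of model parameters, r: data dimension, n: point count.
    # rho caps each normalized squared residual so outliers cannot
    # dominate the score. Lower score wins the comparison.
    rho = np.minimum((residuals / sigma) ** 2, lam3 * (r - d))
    return rho.sum() + lam1 * d * n + lam2 * k
```

Comparing the score of a rotation-only model against a full 3D model on the current track is what lets the system switch between panorama mapping and SLAM-style mapping.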


International Conference on 3D Vision | 2016

Large Scale SfM with the Distributed Camera Model

Chris Sweeney; Victor Fragoso; Tobias Höllerer; Matthew Turk

We introduce the distributed camera model, a novel model for Structure-from-Motion (SfM). This model describes image observations in terms of light rays with ray origins and directions rather than pixels. As such, the proposed model is capable of describing a single camera or multiple cameras simultaneously as the collection of all light rays observed. We show how the distributed camera model is a generalization of the standard camera model, and we describe a general formulation and solution to the absolute camera pose problem that works for standard or distributed cameras. The proposed method computes a solution that is up to 8 times more efficient and robust to rotation singularities in comparison with gDLS [21]. Finally, this method is used in a novel large-scale incremental SfM pipeline where distributed cameras are accurately and robustly merged together. This pipeline is a direct generalization of traditional incremental SfM; however, instead of adding one camera at a time, the reconstruction is grown by adding one distributed camera at a time. Our pipeline produces highly accurate reconstructions efficiently by avoiding the need for many bundle adjustment iterations, and it is capable of computing a 3D model of Rome from over 15,000 images in just 22 minutes.
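The core abstraction can be sketched directly: an observation is a ray with an origin and a unit direction, and a conventional pinhole camera is just the special case where all rays share one origin. A minimal sketch (names are illustrative, not from the paper's code), assuming intrinsics K and a world-to-camera pose (R, t):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Ray:
    # An observation in a ray-based camera model: a light ray with an
    # origin and a unit direction, rather than a pixel coordinate.
    origin: np.ndarray
    direction: np.ndarray

def pixel_to_ray(pixel, K, R, t):
    # Lift a pixel from a calibrated camera (intrinsics K, world-to-
    # camera pose R, t) into a world-frame ray. A single pinhole
    # camera becomes the set of rays sharing one origin; a collection
    # of such rays from many cameras forms one "distributed camera".
    d_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    d_world = R.T @ d_cam
    center = -R.T @ t  # camera center in world coordinates
    return Ray(center, d_world / np.linalg.norm(d_world))
```

Because pose solvers in this model consume only ray origins and directions, the same solver applies whether the rays come from one camera or from an entire merged sub-reconstruction.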


International Symposium on Mixed and Augmented Reality | 2013

Improved outdoor augmented reality through “Globalization”

Chris Sweeney; Tobias Höllerer; Matthew Turk

Despite the major interest in live tracking and mapping (e.g., SLAM), the field of augmented reality has yet to truly make use of the rich data provided by large-scale reconstructions generated by structure from motion. This dissertation focuses on extensible tracking and mapping for large-scale reconstructions that enables SfM and SLAM to operate cooperatively and mutually enhance performance. We describe a multi-user, collaborative augmented reality system that will collectively extend and enhance reconstructions of urban environments at city scale. Contrary to current outdoor augmented reality systems, this system is capable of continuous tracking through areas previously modeled as well as new, undiscovered areas. Further, we describe a new process called globalization that propagates new visual information back to the global model. Globalization allows for continuous updating of the 3D models with visual data from live users, providing data to fill coverage gaps that are common in 3D reconstructions and to provide the most current view of an environment as it changes over time. The proposed research is a crucial step toward enabling users to augment urban environments with location-specific information at any location in the world for a truly global augmented reality.

Collaboration

Top co-authors of Chris Sweeney:

Matthew Turk (University of California)
Victor Fragoso (University of California)
Jonathan Ventura (University of Colorado Colorado Springs)
Laurent Kneip (Australian National University)
Pradeep Sen (University of California)