
Publication


Featured research published by Shawn Recker.


Workshop on Applications of Computer Vision | 2013

Statistical angular error-based triangulation for efficient and accurate multi-view scene reconstruction

Shawn Recker; Mauricio Hess-Flores; Kenneth I. Joy

This paper presents a framework for N-view triangulation of scene points, which improves processing time and final reprojection error with respect to standard methods such as linear triangulation. The framework introduces an angular error-based cost function, which is robust to outliers and inexpensive to compute, and designed such that simple adaptive gradient descent can be applied for convergence. Our method also includes a statistical sampling component, based on confidence levels, that reduces the number of rays used to triangulate a given feature track. It is shown how the statistical component yields a meaningful yet much reduced set of representative rays for triangulation, and how applying the cost function to the reduced sample yields faster and more accurate solutions. Results are demonstrated on real and synthetic data, where the method is shown to significantly increase the speed of triangulation and reduce reprojection error in most cases. Its speed and low memory requirements make it especially attractive for efficient triangulation of large scenes.
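The angular cost and adaptive gradient descent described above can be sketched as follows. This is a minimal NumPy sketch under assumptions: the exact cost formulation, numerical gradient, step-size schedule, and function names are illustrative, not the paper's implementation.

```python
import numpy as np

def angular_cost(point, centers, rays):
    """Sum of per-view angles between each observed unit ray and the ray
    from its camera center to the candidate 3D point."""
    dirs = point - centers                       # (N, 3) rays to candidate
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    cos = np.clip(np.sum(dirs * rays, axis=1), -1.0, 1.0)
    return float(np.sum(np.arccos(cos)))

def triangulate(centers, rays, init, lr=0.1, iters=500):
    """Adaptive gradient descent on the angular cost: a step is accepted
    only if it lowers the cost; otherwise the step size is halved."""
    p = np.asarray(init, dtype=float).copy()
    cost = angular_cost(p, centers, rays)
    eps = 1e-6
    for _ in range(iters):
        # forward-difference numerical gradient
        grad = np.array([(angular_cost(p + eps * e, centers, rays) - cost) / eps
                         for e in np.eye(3)])
        candidate = p - lr * grad
        new_cost = angular_cost(candidate, centers, rays)
        if new_cost < cost:
            p, cost = candidate, new_cost
        else:
            lr *= 0.5
            if lr < 1e-12:
                break
    return p
```

With perfect rays from two cameras, the cost is zero only at the rays' intersection, so the descent recovers the scene point.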


Vision, Modeling and Visualization | 2012

Visualization of Scene Structure Uncertainty in a Multi-View Reconstruction Pipeline

Shawn Recker; Mauricio Hess-Flores; Mark A. Duchaineau; Kenneth I. Joy

This paper presents a novel, interactive visualization tool that allows for the analysis of scene structure uncertainty and its sensitivity to parameters in different multi-view scene reconstruction stages. Given a set of input cameras and feature tracks, the volume rendering-based approach first creates a scalar field from angular error measurements. The obtained statistical, visual, and isosurface information provides insight into the sensitivity of scene structure at the stages leading up to structure computation, such as frame decimation, feature tracking, and self-calibration. Furthermore, user interaction allows for such an analysis in ways that have traditionally been achieved mathematically, without any visual aid. Results are shown for different types of camera configurations, where it is discussed, for example, how over-decimation can be detected using the proposed technique, and how feature tracking inaccuracies have a stronger impact on scene structure than the camera's intrinsic parameters.


International Conference on Pattern Recognition | 2014

Uncertainty, Baseline, and Noise Analysis for L1 Error-Based Multi-view Triangulation

Mauricio Hess-Flores; Shawn Recker; Kenneth I. Joy

A comprehensive uncertainty, baseline, and noise analysis in computing 3D points using a recent L1-based triangulation algorithm is presented. This method is shown to be not only faster and more accurate than its main competitor, linear triangulation, but also more stable under noise and baseline changes. A Monte Carlo analysis of covariance and a confidence ellipsoid analysis were performed over a large range of baselines and noise levels for different camera configurations, to compare performance between angular error-based and linear triangulation. Furthermore, the effect of baseline and noise was analyzed for true multi-view triangulation versus pairwise stereo fusion. Results on real and synthetic data show that L1 angular error-based triangulation tightens confidence ellipsoids, lowers covariance values, and results in more accurate pairwise and multi-view triangulation, for varying numbers of cameras and configurations.
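The Monte Carlo covariance and confidence-ellipsoid analysis can be sketched as follows. This is a hedged sketch: a simple least-squares midpoint triangulator stands in for the triangulation methods compared in the paper, and the noise model and confidence constant are assumptions.

```python
import numpy as np

def midpoint_triangulate(centers, rays):
    """Least-squares 3D point minimizing squared distance to all rays;
    a simple stand-in triangulator for the Monte Carlo loop."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, rays):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

def monte_carlo_ellipsoid(centers, rays, sigma=0.01, trials=2000, seed=0):
    """Perturb the ray directions with Gaussian noise, re-triangulate,
    and return the sample covariance of the 3D estimates together with
    95% confidence-ellipsoid semi-axes and their directions."""
    rng = np.random.default_rng(seed)
    pts = np.empty((trials, 3))
    for i in range(trials):
        noisy = rays + rng.normal(0.0, sigma, rays.shape)
        noisy = noisy / np.linalg.norm(noisy, axis=1, keepdims=True)
        pts[i] = midpoint_triangulate(centers, noisy)
    cov = np.cov(pts.T)
    evals, evecs = np.linalg.eigh(cov)
    chi2_95_3dof = 7.815                 # chi-square quantile, 3 dof, 95%
    semi_axes = np.sqrt(chi2_95_3dof * evals)
    return cov, semi_axes, evecs
```

Larger observation noise or a shorter baseline inflates the ellipsoid, which is the effect the analysis above quantifies.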


Applied Imagery Pattern Recognition Workshop | 2014

Depth data assisted structure-from-motion parameter optimization and feature track correction

Shawn Recker; Christiaan P. Gribble; Mikhail M. Shashkov; Mario Yepez; Mauricio Hess-Flores; Kenneth I. Joy

Structure-from-Motion (SfM) applications attempt to reconstruct the three-dimensional (3D) geometry of an underlying scene from a collection of images taken from various camera viewpoints. Traditional optimization techniques in SfM, which compute and refine camera poses and 3D structure, rely only on feature tracks, or sets of corresponding pixels, generated from color (RGB) images. With the abundance of reliable depth sensor information, these optimization procedures can be augmented to increase the accuracy of reconstruction. This paper presents a general cost function, which evaluates the quality of a reconstruction based upon a previously established angular cost function and depth data estimates. The cost function takes into account two error measures: first, the angular error between each computed 3D scene point and its corresponding feature track location, and second, the difference between the sensor depth value and its computed estimate. A bundle adjustment parameter optimization is implemented using the proposed cost function and evaluated for accuracy and performance. Unlike traditional bundle adjustment, a corrective routine is also included to detect and repair inaccurate feature tracks. The filtering algorithm involves clustering depth estimates of the same scene point and observing the difference between the depth point estimates and the triangulated 3D point. Results on both real and synthetic data are presented and show that reconstruction accuracy is improved.
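A combined angular-plus-depth cost of the kind described above can be sketched as follows. This is an assumption-laden sketch: the paper's exact depth parameterization (e.g., depth along the optical axis versus Euclidean distance from the camera center) and the weight `lam` are illustrative choices.

```python
import numpy as np

def depth_augmented_cost(point, centers, rays, depths, lam=1.0):
    """Per-view angular error between observed and induced rays, plus the
    absolute difference between each sensor depth and the candidate
    point's distance from that camera center; lam weights the depth term."""
    diff = point - centers
    dist = np.linalg.norm(diff, axis=1)          # candidate depth per view
    dirs = diff / dist[:, None]
    ang = np.arccos(np.clip(np.sum(dirs * rays, axis=1), -1.0, 1.0))
    return float(np.sum(ang) + lam * np.sum(np.abs(dist - depths)))
```

A bundle-adjustment-style optimizer would minimize this cost over the scene point (and, jointly, the camera parameters); the cost vanishes only when both the rays and the sensor depths agree with the candidate point.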


Workshop on Applications of Computer Vision | 2014

GPU-accelerated and efficient multi-view triangulation for scene reconstruction

Jason Mak; Mauricio Hess-Flores; Shawn Recker; John D. Owens; Kenneth I. Joy

This paper presents a framework for GPU-accelerated N-view triangulation in multi-view reconstruction that improves processing time and final reprojection error with respect to methods in the literature. The framework uses an algorithm based on optimizing an angular error-based L1 cost function, and it is shown how adaptive gradient descent can be applied for convergence. The triangulation algorithm is mapped onto the GPU and two approaches for parallelization are compared: one thread per track and one thread block per track. The better-performing approach depends on the number of tracks and the lengths of the tracks in the dataset. Furthermore, the algorithm uses statistical sampling based on confidence levels to successfully reduce the quantity of feature track positions needed to triangulate an entire track. Sampling aids in load balancing for the GPU's SIMD architecture and in exploiting the GPU's memory hierarchy. When compared to a serial implementation, a typical performance increase of 3-4× can be achieved on a 4-core CPU. On a GPU, large track numbers are favorable and an increase of up to 40× can be achieved. Results on real and synthetic data show that reprojection errors are similar to those of the best-performing current triangulation methods, at only a fraction of the computation time, allowing for efficient and accurate triangulation of large scenes.
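The confidence-level-based reduction of track positions can be illustrated with the classical sample-size formula plus a finite-population correction. This is an assumption: the confidence level `z`, spread `s`, and tolerance `e` below are illustrative defaults, not the paper's actual estimator or parameters.

```python
import math

def sample_size(track_len, z=1.96, s=1.0, e=0.25):
    """Classical sample-size estimate n0 = (z*s/e)^2, corrected for a
    finite population of track_len ray positions. z is the normal
    quantile for the chosen confidence level, s the assumed spread,
    and e the tolerated estimation error."""
    n0 = (z * s / e) ** 2
    n = n0 / (1.0 + (n0 - 1.0) / track_len)
    return min(track_len, math.ceil(n))
```

For long tracks the sample size saturates near n0, so only a few dozen positions need to be processed per track regardless of track length, which is what makes the sampling attractive for GPU load balancing.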


International Conference on Computer Vision | 2013

Fury of the Swarm: Efficient and Very Accurate Triangulation for Multi-view Scene Reconstruction

Shawn Recker; Mauricio Hess-Flores; Kenneth I. Joy

This paper presents a novel framework for practical and accurate N-view triangulation of scene points. The algorithm is based on applying swarm optimization inside a robustly computed bounding box, using an angular error-based L1 cost function which is more robust to outliers and less susceptible to local minima than cost functions such as L2 on reprojection error. Extensive testing on synthetic data with ground truth, covering thousands of camera configurations with varying degrees of feature tracking errors, has determined an accurate position over 99.9% of the time. As opposed to existing polynomial methods developed for a small number of cameras, the proposed algorithm is at best linear in the number of cameras and does not suffer from inaccuracies inherent in solving high-order polynomials or Gröbner bases. In the specific case of three views, there is a performance increase of two to three orders of magnitude with respect to such methods. Results are provided to highlight performance for arbitrary camera configurations, numbers of cameras, and under noise, which has not been previously achieved in the triangulation literature. Results on real data also show that reprojection error is improved with respect to other methods.
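A swarm search of this kind can be sketched with a textbook particle swarm optimizer confined to a bounding box. The swarm coefficients, particle count, and box below are hypothetical; the paper's swarm settings and robust box computation are not reproduced here.

```python
import numpy as np

def l1_angular_cost(p, centers, rays):
    """L1 cost: sum of absolute angles between observed and induced rays."""
    d = p - centers
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return float(np.sum(np.arccos(np.clip(np.sum(d * rays, axis=1), -1.0, 1.0))))

def pso_triangulate(centers, rays, box_min, box_max,
                    n_particles=40, iters=150, seed=0):
    """Particle swarm search for the scene point inside a bounding box."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(box_min, box_max, (n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                   # per-particle best positions
    pcost = np.array([l1_angular_cost(p, centers, rays) for p in pos])
    gbest = pbest[np.argmin(pcost)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and attraction weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, box_min, box_max)
        cost = np.array([l1_angular_cost(p, centers, rays) for p in pos])
        better = cost < pcost
        pbest[better] = pos[better]
        pcost[better] = cost[better]
        gbest = pbest[np.argmin(pcost)].copy()
    return gbest
```

Because the swarm only ever evaluates the cost (no derivatives, no polynomial solving), its per-iteration work grows linearly with the number of cameras, matching the scaling claim above.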


Applied Imagery Pattern Recognition Workshop | 2015

Efficient dense reconstruction using geometry and image consistency constraints

Mikhail M. Shashkov; Jason Mak; Shawn Recker; Connie S. Nguyen; John D. Owens; Kenneth I. Joy

We introduce a method for creating very dense reconstructions of datasets, particularly turntable varieties. The method takes in initial reconstructions (of any origin) and makes them denser by interpolating depth values in two-dimensional image space within a superpixel region and then optimizing the interpolated value via image consistency analysis across neighboring images in the dataset. One of the core assumptions in this method is that depth values per pixel will vary gradually along a gradient for a given object. As such, turntable datasets, such as the dinosaur dataset, are particularly easy for our method. Our method modernizes some existing techniques and parallelizes them on a GPU, which produces results faster than other densification methods.


Asian Conference on Computer Vision | 2014

A Comparative Study of GPU-Accelerated Multi-view Sequential Reconstruction Triangulation Methods for Large-Scale Scenes

Jason Mak; Mauricio Hess-Flores; Shawn Recker; John D. Owens; Kenneth I. Joy

The angular error-based triangulation method and the parallax path method are both high-performance methods for large-scale multi-view sequential reconstruction that can be parallelized on the GPU. We map parallax paths to the GPU and test its performance and accuracy as a triangulation method for the first time. To this end, we compare it with the angular method on the GPU for both performance and accuracy. Furthermore, we improve the recovery of path scales and perform more extensive analysis and testing compared with the original parallax paths method. Although parallax paths requires sequential and piecewise-planar camera positions, in such scenarios, we can achieve a speedup of up to 14× over angular triangulation, while maintaining comparable accuracy.


Applied Imagery Pattern Recognition Workshop | 2013

Feature track summary visualization for sequential multi-view reconstruction

Shawn Recker; Mauricio Hess-Flores; Kenneth I. Joy

Analyzing sources and causes of error in multi-view scene reconstruction is difficult. In the absence of any ground-truth information, reprojection error is the only valid metric to assess error. Unfortunately, inspecting reprojection error values does not allow computer vision researchers to attribute a cause to the error. A visualization technique to analyze errors in sequential multi-view reconstruction is presented. By computing feature track summaries, researchers can easily observe the progression of feature tracks through a set of frames over time. These summaries easily isolate poor feature tracks and allow the observer to infer the cause of a delinquent track. This visualization technique allows computer vision researchers to analyze errors in ways not previously possible, including a visual performance analysis and comparison between feature trackers, a result previously unachieved in the computer vision literature. This framework also provides the foundation for a number of novel error detection and correction algorithms.


Applied Imagery Pattern Recognition Workshop | 2012

Visualization of scene structure uncertainty in multi-view reconstruction

Shawn Recker; Mauricio Hess-Flores; Mark A. Duchaineau; Kenneth I. Joy

This paper presents an interactive visualization system, based upon previous work, that allows for the analysis of scene structure uncertainty and its sensitivity to parameters in different multi-view scene reconstruction stages. Given a set of input cameras and feature tracks, the volume rendering-based approach creates a scalar field from reprojection error measurements. The obtained statistical, visual, and isosurface information provides insight into the sensitivity of scene structure at the stages leading up to structure computation, such as frame decimation, feature tracking, and self-calibration. Furthermore, user interaction allows for such an analysis in ways that have traditionally been achieved mathematically, without any visual aid. Results are shown for different types of camera configurations for real and synthetic data as well as compared to prior work.

Collaboration


Dive into Shawn Recker's collaborations.

Top Co-Authors

Kenneth I. Joy (University of California)
Jason Mak (University of California)
John D. Owens (University of California)
Mark A. Duchaineau (Lawrence Livermore National Laboratory)
Mario Yepez (University of California)