
Publication


Featured research published by Enrique Dunn.


European Conference on Computer Vision | 2012

Comparative Evaluation of Binary Features

Jared Heinly; Enrique Dunn; Jan Michael Frahm

Performance evaluation of salient features has a long-standing tradition in computer vision. In this paper, we fill the gap of evaluation for the recent wave of binary feature descriptors, which aim to provide robustness while achieving high computational efficiency. We use established metrics to embed our assessment into the body of existing evaluations, allowing us to provide a novel taxonomy unifying both traditional and novel binary features. Moreover, we analyze the performance of different detector and descriptor pairings, which are often used in practice but have been infrequently analyzed. Additionally, we complement existing datasets with novel data testing for illumination change, pure camera rotation, pure scale change, and the variety present in photo-collections. Our performance analysis clearly demonstrates the power of the new class of features. To benefit the community, we also provide a website for the automatic testing of new description methods using our provided metrics and datasets: www.cs.unc.edu/feature-evaluation.
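The core operation behind binary descriptors is matching by Hamming distance, which can be sketched as follows. This is a toy illustration, not the paper's evaluation code; the 256-bit descriptors here are random stand-ins for real BRIEF/ORB-style outputs.

```python
import random

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as Python ints."""
    return bin(a ^ b).count("1")

def match(query, train):
    """Brute-force nearest neighbour under Hamming distance.

    Returns, per query descriptor, (index of closest train descriptor, distance).
    """
    out = []
    for q in query:
        dists = [hamming(q, t) for t in train]
        best = min(range(len(train)), key=dists.__getitem__)
        out.append((best, dists[best]))
    return out

random.seed(0)
train = [random.getrandbits(256) for _ in range(4)]  # 256-bit descriptors
query = train[2] ^ 1                                 # flip one bit of descriptor 2
print(match([query], train))  # → [(2, 1)]
```

Because distances are computed with XOR and a popcount, this matching step is what gives binary features their speed advantage over floating-point descriptors.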


Pattern Recognition Letters | 2006

Parisian camera placement for vision metrology

Enrique Dunn; Gustavo Olague; Evelyne Lutton

This paper presents a novel camera network design methodology based on the Parisian evolutionary computation approach. This methodology proposes to partition the original problem into a set of homogeneous elements, whose individual contribution to the problem solution can be evaluated separately. A population comprised of these homogeneous elements is evolved with the goal of creating a single solution by a process of aggregation. The goal of the Parisian evolutionary process is to locally build better individuals that jointly form better global solutions. The implementation of the proposed approach requires addressing aspects such as problem decomposition and representation, local and global fitness integration, as well as diversity preservation mechanisms. The benefit of applying the Parisian approach to our camera placement problem is a substantial reduction in computational effort expended in the evolutionary optimization process. Moreover, experimental results match those of previous state-of-the-art photogrammetric network design methodologies, while incurring only a fraction of the computational cost.
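The Parisian idea of evolving partial solutions under a blend of local and global fitness can be sketched with a deliberately simple stand-in problem (a minimal sketch under invented assumptions, not the paper's method): each individual is a single camera angle on a circle, its local fitness is its own target coverage, and the global fitness scores the aggregated population.

```python
import random

random.seed(1)

targets = [i * 30 for i in range(12)]        # inspection targets every 30 degrees

def covered(cam, t):
    diff = abs((cam - t + 180) % 360 - 180)  # angular distance on the circle
    return diff <= 45

def local_fitness(cam):
    """Contribution of one individual (a single camera) on its own."""
    return sum(covered(cam, t) for t in targets)

def global_fitness(population):
    """Quality of the aggregated solution formed by the whole population."""
    return len({t for t in targets for cam in population if covered(cam, t)})

# Parisian-style EA: the population holds *partial* solutions, one camera each.
pop = [random.uniform(0, 360) for _ in range(6)]
g0 = global_fitness(pop)

for _ in range(200):
    i = random.randrange(len(pop))
    candidate = (pop[i] + random.gauss(0, 30)) % 360
    trial = pop[:i] + [candidate] + pop[i + 1:]
    # Accept when the aggregate improves; break ties with the local fitness.
    if (global_fitness(trial), local_fitness(candidate)) >= \
       (global_fitness(pop), local_fitness(pop[i])):
        pop = trial

print(global_fitness(pop), "of", len(targets), "targets covered")
```

The acceptance rule never lets the aggregate coverage decrease, which mirrors the Parisian goal of improving individuals only insofar as they improve the jointly formed global solution.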


British Machine Vision Conference | 2009

Next best view planning for active model improvement

Enrique Dunn; Jan Michael Frahm

We propose a novel approach to determining the Next Best View (NBV) for the task of efficiently building highly accurate 3D models from images. Our proposed method deploys a hierarchical uncertainty driven model refinement process designed to select vantage viewpoints based on the model’s covariance structure and appearance, as well as the camera characteristics. The developed NBV planning system incrementally builds a sensing strategy by sequentially finding the single camera placement, which best reduces an existing model’s 3D uncertainty. The generic nature of our system’s design and internal data representation makes it well suited to be applied to a wide variety of 3D modeling algorithms. It can be used within active computer vision systems as well as for optimized view selection from the set of available views. Experimental results are presented to illustrate the effectiveness and versatility of our approach.
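The greedy step at the heart of NBV planning — pick the single placement that most reduces model uncertainty — can be sketched as follows. This toy reduces each point's covariance to a scalar variance and uses invented candidate views; it is an illustration of the selection criterion, not the paper's system.

```python
# Per-point 3D uncertainty, summarised here as a scalar variance per model point.
point_var = [4.0, 1.0, 9.0, 0.5]

# Candidate camera placements: which points each view observes, and the
# (hypothetical) measurement variance each observation would contribute.
views = {
    "A": ([0, 1], 1.0),
    "B": ([2], 1.0),
    "C": ([2, 3], 2.0),
}

def fused_var(prior, meas):
    """Variance after fusing an independent measurement (inverse-variance weighting)."""
    return 1.0 / (1.0 / prior + 1.0 / meas)

def next_best_view(point_var, views):
    """Greedy NBV: choose the placement that most reduces total variance."""
    def reduction(name):
        pts, meas = views[name]
        return sum(point_var[p] - fused_var(point_var[p], meas) for p in pts)
    return max(views, key=reduction)

print(next_best_view(point_var, views))  # → B (it targets the most uncertain point)
```

Repeating this selection after each acquisition, with the fused variances written back into the model, yields the incremental sensing strategy the abstract describes.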


Intelligent Robots and Systems | 2009

Developing visual sensing strategies through next best view planning

Enrique Dunn; Jur van den Berg; Jan Michael Frahm

We propose an approach for acquiring geometric 3D models using cameras mounted on autonomous vehicles and robots. Our method uses structure from motion techniques from computer vision to obtain the geometric structure of the scene. To achieve an efficient goal-driven resource deployment, we develop an incremental approach, which alternates between an accuracy-driven next best view determination and recursive path planning. The next best view is determined by a novel cost function that quantifies the expected contribution of future viewing configurations. A sensing path for robot motion towards the next best view is then achieved by a cost-driven recursive search of intermediate viewing configurations. We discuss some of the properties of our view cost function in the context of an iterative view planning process and present experimental results on a synthetic environment.


Computer Vision and Pattern Recognition | 2015

Reconstructing the world* in six days

Jared Heinly; Johannes L. Schönberger; Enrique Dunn; Jan Michael Frahm

We propose a novel, large-scale, structure-from-motion framework that advances the state of the art in data scalability from city-scale modeling (millions of images) to world-scale modeling (several tens of millions of images) using just a single computer. The main enabling technology is the use of a streaming-based framework for connected component discovery. Moreover, our system employs an adaptive, online, iconic image clustering approach based on an augmented bag-of-words representation, in order to balance the goals of registration, comprehensiveness, and data compactness. We demonstrate our proposal by operating on a recent publicly available 100 million image crowd-sourced photo collection containing images geographically distributed throughout the entire world. Results illustrate that our streaming-based approach does not compromise model completeness, but achieves unprecedented levels of efficiency and scalability.
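Streaming connected component discovery over image-match edges is classically handled with a union-find structure, which can be sketched as below. This is a generic illustration of the data structure (with made-up image names), not the paper's implementation.

```python
class DisjointSet:
    """Union-find with path halving, suitable for edges arriving as a stream."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

# Image-match edges arriving one at a time from a streamed matcher.
stream = [("img1", "img2"), ("img3", "img4"), ("img2", "img3"), ("img5", "img6")]
ds = DisjointSet()
for a, b in stream:
    ds.union(a, b)

# Group images by their component root.
components = {}
for node in ds.parent:
    components.setdefault(ds.find(node), set()).add(node)
print(sorted(len(c) for c in components.values()))  # → [2, 4]
```

Because each union is near-constant time and only the parent map must be kept in memory, the component structure of tens of millions of images can be maintained on a single machine as matches stream in.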


Intelligent Robots and Systems | 2005

Pareto optimal camera placement for automated visual inspection

Enrique Dunn; Gustavo Olague

In this work the problem of camera placement for automated visual inspection is studied under a multi-objective framework. Reconstruction accuracy and operational costs are incorporated into our methodology as separate criteria to optimize. Our approach is based on the initial assumption of conflict among the considered objectives. Hence, the expected results are in the form of Pareto optimal compromise solutions. In order to solve our optimization problem an evolutionary based technique is implemented. Experimental results confirm the conflict among the considered objectives and offer important insights into the relationships between solution quality and process efficiency for high-accurate 3D reconstruction systems.
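The Pareto optimality criterion the abstract relies on can be made concrete with a small dominance filter. The (error, cost) pairs below are hypothetical candidate camera networks, not data from the paper; both objectives are minimized.

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one. Objectives: (reconstruction error, operational cost)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (Pareto optimal) camera placements."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Hypothetical (error, cost) pairs for candidate camera networks.
candidates = [(0.10, 9.0), (0.30, 4.0), (0.20, 6.0), (0.25, 7.0), (0.50, 3.0)]
print(sorted(pareto_front(candidates)))
# → [(0.1, 9.0), (0.2, 6.0), (0.3, 4.0), (0.5, 3.0)]
```

Note that (0.25, 7.0) is removed because (0.20, 6.0) is better in both objectives; the surviving front is exactly the set of compromise solutions among which a designer must trade accuracy against cost.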


Computer Vision and Pattern Recognition | 2014

PatchMatch Based Joint View Selection and Depthmap Estimation

Enliang Zheng; Enrique Dunn; Vladimir Jojic; Jan Michael Frahm

We propose a multi-view depthmap estimation approach aimed at adaptively ascertaining the pixel level data associations between a reference image and all the elements of a source image set. Namely, we address the question, what aggregation subset of the source image set should we use to estimate the depth of a particular pixel in the reference image? We pose the problem within a probabilistic framework that jointly models pixel-level view selection and depthmap estimation given the local pairwise image photoconsistency. The corresponding graphical model is solved by EM-based view selection probability inference and PatchMatch-like depth sampling and propagation. Experimental results on standard multi-view benchmarks convey the state-of-the-art estimation accuracy afforded by mitigating spurious pixel level data associations. Additionally, experiments on large Internet crowd-sourced data demonstrate the robustness of our approach against unstructured and heterogeneous image capture characteristics. Moreover, the linear computational and storage requirements of our formulation, as well as its inherent parallelism, enable an efficient and scalable GPU-based implementation.
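The PatchMatch-like sampling and propagation the abstract mentions can be sketched on a toy 1-D problem: random initialisation, spatial propagation of a neighbour's depth hypothesis, and random refinement. The "photoconsistency" cost below is a made-up proxy (distance to a known ground-truth depth), so this illustrates only the propagation scheme, not the paper's joint view-selection model.

```python
import random

random.seed(7)

# Toy 1-D "image row": a true depth per pixel, and a photoconsistency proxy
# that shrinks as a depth hypothesis approaches the truth.
true_depth = [2.0, 2.1, 2.2, 5.0, 5.1, 5.2]

def cost(pixel, d):
    return abs(d - true_depth[pixel])

# 1. Random initialisation of the depth hypotheses.
depth = [random.uniform(1.0, 6.0) for _ in true_depth]
init_cost = sum(cost(p, depth[p]) for p in range(len(depth)))

for _ in range(3):  # a few sweeps
    # 2. Spatial propagation: adopt the left neighbour's hypothesis if better.
    for p in range(1, len(depth)):
        if cost(p, depth[p - 1]) < cost(p, depth[p]):
            depth[p] = depth[p - 1]
    # 3. Random refinement: perturb each hypothesis, keep only improvements.
    for p in range(len(depth)):
        d = depth[p] + random.uniform(-0.5, 0.5)
        if cost(p, d) < cost(p, depth[p]):
            depth[p] = d

final_cost = sum(cost(p, depth[p]) for p in range(len(depth)))
print(round(init_cost, 2), "->", round(final_cost, 2))
```

Every update is accepted only if it lowers the per-pixel cost, so the total cost is non-increasing across sweeps; each pixel's update is independent within a sweep, which is the source of the parallelism the abstract exploits on the GPU.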


International Symposium on Mixed and Augmented Reality | 2014

P-HRTF: Efficient personalized HRTF computation for high-fidelity spatial sound

Alok Meshram; Ravish Mehra; Hongsheng Yang; Enrique Dunn; Jan Michael Frahm; Dinesh Manocha

Accurate rendering of 3D spatial audio for interactive virtual auditory displays requires the use of personalized head-related transfer functions (HRTFs). We present a new approach to compute personalized HRTFs for any individual using a method that combines state-of-the-art image-based 3D modeling with an efficient numerical simulation pipeline. Our 3D modeling framework enables capture of the listener's head and torso using consumer-grade digital cameras to estimate a high-resolution non-parametric surface representation of the head, including the extended vicinity of the listener's ear. We leverage sparse structure from motion and dense surface reconstruction techniques to generate a 3D mesh. This mesh is used as input to a numeric sound propagation solver, which uses acoustic reciprocity and Kirchhoff surface integral representation to efficiently compute an individual's personalized HRTF. The overall computation takes tens of minutes on a multi-core desktop machine. We have used our approach to compute the personalized HRTFs of a few individuals, and we present our preliminary evaluation here. To the best of our knowledge, this is the first commodity technique that can be used to compute personalized HRTFs in a lab or home setting.


European Conference on Computer Vision | 2014

Correcting for Duplicate Scene Structure in Sparse 3D Reconstruction

Jared Heinly; Enrique Dunn; Jan Michael Frahm

Structure from motion (SfM) is a common technique to recover 3D geometry and camera poses from sets of images of a common scene. In many urban environments, however, there are symmetric, repetitive, or duplicate structures that pose challenges for SfM pipelines. The result of these ambiguous structures is incorrectly placed cameras and points within the reconstruction. In this paper, we present a post-processing method that can not only detect these errors, but successfully resolve them. Our novel approach proposes the strong and informative measure of conflicting observations, and we demonstrate that it is robust to a large variety of scenes.


Lecture Notes in Computer Science | 2004

Multi-objective Sensor Planning for Efficient and Accurate Object Reconstruction

Enrique Dunn; Gustavo Olague

A novel approach for sensor planning, which incorporates multi-objective optimization principles into the autonomous design of sensing strategies, is presented. The study addresses planning the behavior of an automated 3D inspection system, consisting of a manipulator robot in an Eye-on-Hand configuration. Task planning in this context is stated as a constrained multi-objective optimization problem, where reconstruction accuracy and robot motion efficiency are the criteria to optimize. An approach based on evolutionary computation is developed and experimental results are shown. The obtained convex Pareto front of solutions confirms the conflict among objectives in our planning.

Collaboration


Dive into Enrique Dunn's collaboration.

Top Co-Authors (all at the University of North Carolina at Chapel Hill)

Jan Michael Frahm
Ke Wang
Enliang Zheng
Jared Heinly
Dinghuang Ji
Brian Clipp
Yilin Wang
Hyo Jin Kim
Pierre Fite-Georgel