Jared Heinly
University of North Carolina at Chapel Hill
Publications
Featured research published by Jared Heinly.
european conference on computer vision | 2012
Jared Heinly; Enrique Dunn; Jan Michael Frahm
Performance evaluation of salient features has a long-standing tradition in computer vision. In this paper, we fill the gap of evaluation for the recent wave of binary feature descriptors, which aim to provide robustness while achieving high computational efficiency. We use established metrics to embed our assessment into the body of existing evaluations, allowing us to provide a novel taxonomy unifying both traditional and novel binary features. Moreover, we analyze the performance of different detector and descriptor pairings, which are often used in practice but have been infrequently evaluated. Additionally, we complement existing datasets with novel data testing for illumination change, pure camera rotation, pure scale change, and the variety present in photo-collections. Our performance analysis clearly demonstrates the power of the new class of features. To benefit the community, we also provide a website for the automatic testing of new description methods using our provided metrics and datasets: www.cs.unc.edu/feature-evaluation.
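As a rough illustration only, the sketch below pairs a FAST detector with the ORB binary descriptor in OpenCV and reports a crude cross-checked match ratio between two overlapping images; the metric, image names, and parameter choices are illustrative assumptions, not the paper's evaluation protocol.

```python
# A minimal sketch (not the paper's evaluation harness) of scoring a
# detector/descriptor pairing with a simple putative-match ratio,
# assuming OpenCV and two overlapping images img1.png / img2.png.
import cv2

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.FastFeatureDetector_create()   # detector choice
descriptor = cv2.ORB_create()                 # binary descriptor choice

kp1 = detector.detect(img1, None)
kp2 = detector.detect(img2, None)
kp1, des1 = descriptor.compute(img1, kp1)
kp2, des2 = descriptor.compute(img2, kp2)

# Hamming distance is the appropriate norm for binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Crude quality proxy: fraction of keypoints that found a cross-checked match.
print(len(matches) / max(1, min(len(kp1), len(kp2))))
```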
computer and communications security | 2013
Yi Xu; Jared Heinly; Andrew M. White; Fabian Monrose; Jan Michael Frahm
Of late, threats enabled by the ubiquitous use of mobile devices have drawn much interest from the research community. However, prior threats all suffer from a similar, and profound, weakness: namely the requirement that the adversary is either within visual range of the victim (e.g., to ensure that the pop-out events in reflections in the victim's sunglasses can be discerned) or is close enough to the target to avoid the use of expensive telescopes. In this paper, we broaden the scope of these attacks by relaxing these requirements and show that breaches of privacy are possible even when the adversary is around a corner. The approach we take overcomes challenges posed by low image resolution by extending computer vision methods to operate on small, high-noise images. Moreover, our work is applicable to all types of keyboards because of a novel application of fingertip motion analysis for key-press detection. In doing so, we are also able to exploit reflections in the eyeball of the user or even repeated reflections (i.e., a reflection of a reflection of the mobile device in the eyeball of the user). Our empirical results show that we can perform these attacks with high accuracy, and can do so in scenarios that aptly demonstrate the realism of this threat.
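As a loose illustration of the fingertip-motion idea mentioned above (not the paper's method, which tracks fingertips rather than whole-frame motion), the sketch below measures dense optical flow in a video of a reflection and flags frames where motion briefly stalls as candidate key presses; the video name and threshold are assumptions.

```python
# A simplified sketch of key-press detection from motion: when the observed
# motion in the reflection briefly stalls, flag a candidate key press.
# The stall threshold is illustrative, not a value from the paper.
import cv2
import numpy as np

cap = cv2.VideoCapture("reflection.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

STALL_THRESHOLD = 0.2   # mean flow magnitude (pixels/frame), illustrative
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2).mean()
    if magnitude < STALL_THRESHOLD:
        print(f"candidate key press near frame {frame_idx}")
    prev_gray = gray
    frame_idx += 1
```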
computer vision and pattern recognition | 2015
Jared Heinly; Johannes L. Schönberger; Enrique Dunn; Jan Michael Frahm
We propose a novel, large-scale, structure-from-motion framework that advances the state of the art in data scalability from city-scale modeling (millions of images) to world-scale modeling (several tens of millions of images) using just a single computer. The main enabling technology is the use of a streaming-based framework for connected component discovery. Moreover, our system employs an adaptive, online, iconic image clustering approach based on an augmented bag-of-words representation, in order to balance the goals of registration, comprehensiveness, and data compactness. We demonstrate our proposal by operating on a recent publicly available 100 million image crowd-sourced photo collection containing images geographically distributed throughout the entire world. Results illustrate that our streaming-based approach does not compromise model completeness, but achieves unprecedented levels of efficiency and scalability.
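A minimal sketch of the streaming idea above: each incoming image is compared only against a bounded set of iconic (cluster-representative) images, and a union-find structure maintains the connected components discovered so far. The match() function, which would wrap the augmented bag-of-words comparison plus geometric verification, is a placeholder assumption.

```python
# Streaming connected-component discovery, sketched with a union-find over
# image identifiers (e.g. filenames). Only cluster representatives are kept
# in memory, so a single pass over the collection suffices.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def process_stream(images, match, max_iconics=10000):
    uf = UnionFind()
    iconics = []                      # bounded set of cluster representatives
    for img in images:                # single pass over the photo collection
        registered = False
        for iconic in iconics:
            if match(img, iconic):    # e.g. bag-of-words + geometric check
                uf.union(img, iconic)
                registered = True
        if not registered and len(iconics) < max_iconics:
            iconics.append(img)       # start a new cluster with this image
    return uf
```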
european conference on computer vision | 2014
Jared Heinly; Enrique Dunn; Jan Michael Frahm
Structure from motion (SfM) is a common technique to recover 3D geometry and camera poses from sets of images of a common scene. In many urban environments, however, there are symmetric, repetitive, or duplicate structures that pose challenges for SfM pipelines. The result of these ambiguous structures is incorrectly placed cameras and points within the reconstruction. In this paper, we present a post-processing method that can not only detect these errors, but successfully resolve them. Our novel approach proposes the strong and informative measure of conflicting observations, and we demonstrate that it is robust to a large variety of scenes.
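As a rough illustration of the conflicting-observations idea (a heavily simplified stand-in, not the paper's measure), the sketch below counts, for an image pair, points seen by one camera that fall inside the other camera's view yet were never observed there; the visibility helpers are hypothetical placeholders.

```python
# A drastically simplified stand-in for a conflicting-observations score:
# points observed in image a that project into image b's frustum, are not
# occluded, and yet were never observed in b count as conflicts. The helpers
# projects_into() and is_occluded() are hypothetical placeholders.
def conflict_score(obs_a, obs_b, points, cam_b, projects_into, is_occluded):
    """obs_a / obs_b: sets of 3D point ids observed in images a / b."""
    conflicts = 0
    candidates = 0
    for pid in obs_a:
        X = points[pid]
        if projects_into(X, cam_b) and not is_occluded(X, cam_b):
            candidates += 1
            if pid not in obs_b:
                conflicts += 1      # b should see this point but does not
    return conflicts / candidates if candidates else 0.0
```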
ieee/ion position, location and navigation symposium | 2014
Alberico Menozzi; Brian Clipp; Eric Wenger; Jared Heinly; Enrique Dunn; Herman Towles; Jan Michael Frahm; Gregory F. Welch
This paper describes the development of vision-aided navigation (i.e., pose estimation) for a wearable augmented reality system operating in natural outdoor environments. This system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered graphics on the user's view of reality. Accurate pose estimation is achieved through the integration of inertial, magnetic, GPS, terrain elevation, and computer vision inputs. Specifically, a helmet-mounted forward-looking camera and custom computer vision algorithms are used to provide measurements of absolute orientation (i.e., the orientation of the helmet with respect to the Earth). These orientation measurements, which leverage mountainous terrain horizon geometry and/or known landmarks, enable the system to achieve significant improvements in accuracy compared to GPS/INS solutions of similar size, weight, and power, and to operate robustly in the presence of magnetic disturbances. Recent field testing activities, across a variety of environments where these vision-based signals of opportunity are available, indicate that high accuracy (less than 10 mrad) in graphics geo-registration can be achieved. This paper presents the pose estimation process, the methods behind the generation of vision-based measurements, and representative experimental results.
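As a toy illustration of fusing an absolute, vision-derived orientation with a drifting inertial estimate (the actual system integrates many sensor inputs in a full navigation filter), the sketch below applies a simple complementary-filter correction to a heading angle; the gain is an illustrative assumption.

```python
# A minimal complementary-filter sketch: a drifting integrated heading is
# nudged toward an absolute heading measurement (e.g. derived from matching
# the observed horizon against terrain elevation data).
import math

def fuse_heading(gyro_heading, vision_heading, gain=0.05):
    """Blend headings (radians); the innovation is wrapped to [-pi, pi]
    so the correction takes the short way around the circle."""
    innovation = math.atan2(math.sin(vision_heading - gyro_heading),
                            math.cos(vision_heading - gyro_heading))
    return gyro_heading + gain * innovation
```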
Geo-spatial Information Science | 2013
Jan Michael Frahm; Jared Heinly; Enliang Zheng; Enrique Dunn; Pierre Fite-Georgel; Marc Pollefeys
In this article, we present our system for scalable, robust, and fast city-scale reconstruction from Internet photo collections (IPC), obtaining geo-registered dense 3D models. The major achievement of our system is the efficient use of coarse appearance descriptors combined with strong geometric constraints to reduce the computational complexity of the image overlap search. This unique combination of recognition and geometric constraints allows our method to reduce from quadratic complexity in the number of images to almost linear complexity in the IPC size. Accordingly, our 3D-modeling framework is inherently more scalable than other state-of-the-art methods and is currently the only method to support modeling from millions of images. In addition, we propose a novel mechanism to overcome the inherent scale ambiguity of the reconstructed models by exploiting geo-tags of the Internet photo collection images and readily available StreetView panoramas for fully automatic geo-registration of the 3D model. Moreover, our system exploits image appearance clustering to tackle the challenge of computing dense 3D models from an image collection that has significant variation in illumination between images, along with a wide variety of sensors and their associated radiometric camera parameters. Our algorithm exploits the redundancy of the data to suppress estimation noise through a novel depth map fusion. The fusion simultaneously exploits surface and free-space constraints while combining a large number of depth maps. Cost volume compression during the fusion achieves lower memory requirements for high-resolution models. We demonstrate our system on a variety of scenes from an Internet photo collection of Berlin containing almost three million images, from which we compute dense models in less than a day on a single computer.
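A minimal sketch of the coarse-appearance idea above: images are first grouped on a cheap global descriptor so that expensive geometric overlap verification only runs within clusters, replacing the quadratic all-pairs search. The gist() descriptor function, the cluster count, and the use of k-means here are illustrative assumptions rather than the system's actual components.

```python
# Prune the image-overlap search by clustering coarse appearance descriptors
# and only generating candidate pairs within each cluster.
import numpy as np
from sklearn.cluster import KMeans

def candidate_pairs(images, gist, n_clusters=1000):
    descriptors = np.stack([gist(img) for img in images])
    k = min(n_clusters, len(images))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(descriptors)
    by_cluster = {}
    for idx, label in enumerate(labels):
        by_cluster.setdefault(label, []).append(idx)
    pairs = []
    for members in by_cluster.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.append((members[i], members[j]))  # verify these only
    return pairs
```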
Journal of Graphics Tools | 2009
Jared Heinly; Shawn Recker; Kevin Bensema; Jesse Porch; Christiaan P. Gribble
Despite nearly universal support for the IEEE 754 floating-point standard on modern general-purpose processors, a wide variety of more specialized processors do not provide hardware floating-point units and rely instead on integer-only pipelines. Ray tracing on these platforms thus requires an integer rendering process. Toward this end, we clarify the details of an existing fixed-point ray/triangle intersection method, provide an annotated implementation of that method in C++, introduce two refinements that lead to greater flexibility and improved accuracy, and highlight the issues necessary to implement common material models in an integer-only context. Finally, we provide the source code for a template-based integer/floating-point ray tracer to serve as a testbed for additional experimentation with integer ray tracing methods.
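To make the fixed-point idea concrete, the following sketch transcribes a Moller-Trumbore style ray/triangle test into Q16.16 integer arithmetic in Python; it illustrates the re-normalizing shifts after multiplication but, unlike the paper's C++ implementation, ignores overflow and rounding refinements because Python integers are arbitrary precision.

```python
# Fixed-point ray/triangle intersection sketch. Coordinates are Q16.16:
# integers carrying 16 fractional bits.
F = 16                       # fractional bits

def to_fixed(x):  return int(round(x * (1 << F)))
def fx_mul(a, b): return (a * b) >> F            # re-normalize after multiply
def fx_div(a, b): return (a << F) // b           # floor division (sketch only)

def fx_sub(a, b): return [x - y for x, y in zip(a, b)]
def fx_dot(a, b): return sum(fx_mul(x, y) for x, y in zip(a, b))
def fx_cross(a, b):
    return [fx_mul(a[1], b[2]) - fx_mul(a[2], b[1]),
            fx_mul(a[2], b[0]) - fx_mul(a[0], b[2]),
            fx_mul(a[0], b[1]) - fx_mul(a[1], b[0])]

def intersect(orig, direction, v0, v1, v2):
    """Return the fixed-point hit distance t, or None if the ray misses."""
    e1, e2 = fx_sub(v1, v0), fx_sub(v2, v0)
    p = fx_cross(direction, e2)
    det = fx_dot(e1, p)
    if det == 0:
        return None                               # ray parallel to triangle
    tvec = fx_sub(orig, v0)
    u = fx_div(fx_dot(tvec, p), det)
    if u < 0 or u > (1 << F):                     # (1 << F) represents 1.0
        return None
    q = fx_cross(tvec, e1)
    v = fx_div(fx_dot(direction, q), det)
    if v < 0 or u + v > (1 << F):
        return None
    return fx_div(fx_dot(e2, q), det)
```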
international conference on 3d vision | 2014
Jared Heinly; Enrique Dunn; Jan Michael Frahm
Structure-from-motion (SFM) is widely utilized to generate 3D reconstructions from unordered photo-collections. However, in the presence of non-unique, symmetric, or otherwise indistinguishable structure, SFM techniques often incorrectly reconstruct the final model. We propose a method that not only determines if an error is present, but automatically corrects the error in order to produce a correct representation of the scene. We find that by exploiting the co-occurrence information present in the scene's geometry, we can successfully isolate the 3D points causing the incorrect result. This allows us to split an incorrect reconstruction into error-free sub-models that we then correctly merge back together. Our experimental results show that our technique is efficient, robust to a variety of scenes, and outperforms existing methods.
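As a simplified illustration of the split step mentioned above (not the paper's algorithm), the sketch below drops camera-pair connections whose conflict score exceeds a threshold and returns the remaining connected components as candidate sub-models; the threshold and input format are assumptions.

```python
# Split a suspect reconstruction into internally consistent sub-models:
# keep only low-conflict camera-pair edges and take connected components.
from collections import defaultdict, deque

def split_into_submodels(cameras, pairwise_conflict, threshold=0.3):
    adj = defaultdict(list)
    for (a, b), score in pairwise_conflict.items():
        if score <= threshold:          # keep only consistent connections
            adj[a].append(b)
            adj[b].append(a)
    seen, submodels = set(), []
    for cam in cameras:
        if cam in seen:
            continue
        component, queue = [], deque([cam])
        seen.add(cam)
        while queue:
            cur = queue.popleft()
            component.append(cur)
            for nxt in adj[cur]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        submodels.append(component)     # one error-free sub-model
    return submodels
```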
Archive | 2016
David Boardman; Brian Clipp; Charles Erignac; Jan Michael Frahm; Jared Heinly; Srinivas Kapaganty