
Publication


Featured research published by Nathan Jacobs.


Computer Vision and Pattern Recognition | 2007

Consistent Temporal Variations in Many Outdoor Scenes

Nathan Jacobs; Nathaniel Roman; Robert Pless

This paper details an empirical study of large image sets taken by static cameras. These images have consistent correlations over the entire image and over time scales of days to months. Simple second-order statistics of such image sets show vastly more structure than exists in generic natural images or video from moving cameras. Using a slight variant of PCA, we can decompose all cameras into comparable components and annotate images with respect to surface orientation, weather, and seasonal change. Experiments are based on a data set from 538 cameras across the United States which have collected more than 17 million images over the last 6 months.
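The decomposition the abstract describes can be illustrated in miniature. The sketch below is not the paper's method: it uses plain Python, power iteration as a stand-in for full PCA, and invented two-pixel "images".

```python
# Toy illustration of PCA over images from a static camera. Each image
# is a flat list of pixel intensities; power iteration (a stand-in for
# a full PCA) recovers the dominant component of the image set.

def mean_vector(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def first_principal_component(rows, iters=200):
    """Leading eigenvector of the data covariance, via power iteration."""
    mu = mean_vector(rows)
    centered = [[v - m for v, m in zip(row, mu)] for row in rows]
    d = len(centered[0])
    v = [1.0] * d
    for _ in range(iters):
        # w = X^T (X v): covariance matrix times v, up to a constant factor
        proj = [sum(x * u for x, u in zip(row, v)) for row in centered]
        w = [sum(p * row[j] for p, row in zip(proj, centered)) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# Six two-pixel "images" whose brightness rises together, mimicking the
# scene-wide day/night correlation the paper exploits; the second pixel
# varies twice as strongly, and the component recovers that 1:2 ratio.
images = [[0.0, 0.0], [1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 8.0], [5.0, 10.0]]
pc1 = first_principal_component(images)
```

The per-image projections onto such components are the "comparable components" the abstract refers to; a real implementation would work on full-resolution images with a linear-algebra library.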


International Conference on Computer Vision | 2007

Geolocating Static Cameras

Nathan Jacobs; Scott Satkin; Nathaniel Roman; Richard Speyer; Robert Pless

A key problem in widely distributed camera networks is locating the cameras. This paper considers three scenarios for camera localization: localizing a camera in an unknown environment, adding a new camera in a region with many other cameras, and localizing a camera by finding correlations with satellite imagery. We find that simple summary statistics (the time course of principal component coefficients) are sufficient to geolocate cameras without determining correspondences between cameras or explicitly reasoning about weather in the scene. We present results from a database of images from 538 cameras collected over the course of a year. We find that for cameras that remain stationary and for which we have accurate image timestamps, we can localize most cameras to within 50 miles of the known location. In addition, we demonstrate the use of a distributed camera network in the construction of a map of weather conditions.
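As a toy illustration of the second scenario (placing a new camera among known ones), the sketch below matches a camera to the known camera whose coefficient time course it correlates with best. The camera names, data, and single-coefficient summary are invented for the example; the paper's actual statistics are richer.

```python
# Match a new camera to known cameras by correlating summary time
# courses (here, one made-up coefficient sampled eight times per day).

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def closest_camera(query_course, known_courses):
    """Known camera whose time course best matches the query camera's."""
    return max(known_courses,
               key=lambda cam: correlation(query_course, known_courses[cam]))

# Sunrise arrives hours later on the west coast, so an eastern camera's
# daily brightness curve is shifted relative to a western one.
east = [0, 0, 5, 9, 9, 5, 0, 0]
west = [0, 0, 0, 5, 9, 9, 5, 0]
print(closest_camera([0, 0, 4, 8, 9, 4, 0, 0], {"cam_east": east, "cam_west": west}))  # → cam_east
```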


Advances in Geographic Information Systems | 2009

The global network of outdoor webcams: properties and applications

Nathan Jacobs; Walker Burgin; Nick Fridrich; Austin Abrams; Kylia Miskell; Bobby H. Braswell; Andrew D. Richardson; Robert Pless

There are thousands of outdoor webcams which offer live images freely over the Internet. We report on methods for discovering and organizing this already existing and massively distributed global sensor, and argue that it provides an interesting alternative to satellite imagery for global-scale remote sensing applications. In particular, we characterize the live imaging capabilities that are freely available as of the summer of 2009 in terms of the spatial distribution of the cameras, their update rate, and characteristics of the scene in view. We offer algorithms that exploit the fact that webcams are typically static to simplify the tasks of inferring relevant environmental and weather variables directly from image data. Finally, we show that organizing and exploiting the large, ad-hoc, set of cameras attached to the web can dramatically increase the data available for studying particular problems in phenology.


International Conference on Computer Vision | 2015

Wide-Area Image Geolocalization with Aerial Reference Imagery

Scott Workman; Richard Souvenir; Nathan Jacobs

We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly outperform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales.


Computer Vision and Pattern Recognition | 2010

Using cloud shadows to infer scene structure and camera calibration

Nathan Jacobs; Brian Bies; Robert Pless

We explore the use of clouds as a form of structured lighting to capture the 3D structure of outdoor scenes observed over time from a static camera. We derive two cues that relate 3D distances to changes in pixel intensity due to cloud shadows. The first cue is primarily spatial, works with low frame-rate time lapses, and supports estimating focal length and scene structure, up to a scale ambiguity. The second cue depends on cloud motion and has a more complex, but still linear, ambiguity. We describe a method that uses the spatial cue to estimate a depth map and a method that combines both cues. Results on time lapses of several outdoor scenes show that these cues enable estimating scene geometry and camera focal length.


Computer Vision and Pattern Recognition | 2009

Adventures in archiving and using three years of webcam images

Nathan Jacobs; Walker Burgin; Richard Speyer; David G. Ross; Robert Pless

Recent descriptions of algorithms applied to images archived from webcams tend to underplay the challenges in working with large data sets acquired from uncontrolled webcams in real environments. In building a database of images captured from 1000 webcams, every 30 minutes for the last 3 years, we observe that these cameras have a wide variety of failure modes. This paper details steps we have taken to make this dataset more easily useful to the research community, including (a) tools for finding stable temporal segments, and stabilizing images when the camera is nearly stable, (b) visualization tools to quickly summarize a year's worth of image data from one camera and to give a set of exemplars that highlight anomalies within the scene, and (c) integration with LabelMe, allowing labels of static features in one image of a scene to propagate to the thousands of other images of that scene. We also present proof-of-concept algorithms showing how this data conditioning supports several problems in inferring properties of the scene from image data.
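The idea behind "finding stable temporal segments" can be approximated by something as simple as thresholding frame-to-frame change; the minimal sketch below (threshold, frame size, and data all invented, not the paper's actual tool) splits a frame sequence at abrupt appearance changes such as camera failures or moves.

```python
# Split a frame sequence into stable segments by flagging frames whose
# mean absolute difference from the previous frame exceeds a threshold.

def mean_abs_diff(a, b):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def stable_segments(frames, threshold=10.0):
    """List of (start, end) index pairs of stable runs, inclusive."""
    segments, start = [], 0
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            segments.append((start, i - 1))
            start = i
    segments.append((start, len(frames) - 1))
    return segments

# Four dark frames, then an abrupt jump to bright frames (e.g. the
# camera was bumped or failed): two stable segments result.
frames = [[10, 10]] * 4 + [[200, 200]] * 3
print(stable_segments(frames))  # → [(0, 3), (4, 6)]
```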


Computer Vision and Pattern Recognition | 2013

Cloud Motion as a Calibration Cue

Nathan Jacobs; Mohammad T. Islam; Scott Workman

We propose cloud motion as a natural scene cue that enables geometric calibration of static outdoor cameras. This work introduces several new methods that use observations of an outdoor scene over days and weeks to estimate radial distortion, focal length and geo-orientation. Cloud-based cues provide strong constraints and are an important alternative to methods that require specific forms of static scene geometry or clear sky conditions. Our method makes simple assumptions about cloud motion and builds upon previous work on motion-based and line-based calibration. We show results on real scenes that highlight the effectiveness of our proposed methods.


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Time Scales in Video Surveillance

Nathan Jacobs; Robert Pless

Events in surveillance video occur over many time scales, but common approaches to background subtraction and video representation are implicitly based on a single temporal scale. In this work, we derive a set of causal filters which define a temporal scale-space representation for the activity at each pixel. This scale-space can be maintained and continuously updated in real time and, for static cameras viewing dynamic scenes, has several interesting properties. In particular, it directly characterizes interesting temporal features and supports approximate reconstruction of the video history under challenging noise conditions. The temporal scale-space grounds novel approaches to several applications, including a natural visualization tool to summarize recent video behavior in a single image, and a tool to directly report how long the object has been present in a scene without reexamining any video data.
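A crude way to picture a per-pixel temporal scale-space is a bank of exponential moving averages with different time constants, each updated in constant time per frame. This is only a sketch of the general idea, not the causal filters the paper derives, and the constants below are invented.

```python
# Per-pixel temporal "scale-space" sketch: one running average per time
# scale, updated in real time as each new pixel value arrives.

def update_scale_space(state, pixel, alphas):
    """One update step: state[k] is the running average at scale k."""
    return [a * pixel + (1 - a) * s for s, a in zip(state, alphas)]

alphas = [0.5, 0.1, 0.01]      # fast, medium, and slow time scales
state = [0.0, 0.0, 0.0]
for value in [100.0] * 50:      # an object appears and stays for 50 frames
    state = update_scale_space(state, value, alphas)
# The fast scale has converged to the new value, while the slow scale
# still "remembers" the empty scene; comparing scales is what lets a
# representation like this estimate how long the object has been present.
```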


Workshop on Applications of Computer Vision | 2011

Webcam geo-localization using aggregate light levels

Nathan Jacobs; Kylia Miskell; Robert Pless

We consider the problem of geo-locating static cameras from long-term time-lapse imagery. This problem has received significant attention recently, with most methods making strong assumptions on the geometric structure of the scene. We explore a simple, robust cue that relates overall image intensity to the zenith angle of the sun (which need not be visible). We characterize the accuracy of geolocation based on this cue as a function of different models of the zenith-intensity relationship and the amount of imagery available. We evaluate our algorithm on a dataset of more than 60 million images captured from outdoor webcams located around the globe. We find that using our algorithm with images sampled every 30 minutes yields localization errors of less than 100 km for the majority of cameras.
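The zenith-intensity cue can be sketched with the standard solar-position formula cos z = sin φ sin δ + cos φ cos δ cos H (latitude φ, declination δ, hour angle H). The toy below fixes the declination at a solstice value and scores candidate latitudes by how well predicted cos z correlates with a synthetic brightness curve; the paper's actual models and data are far richer, and every number here is invented.

```python
import math

def cos_zenith(lat_deg, hour, decl_deg=23.4):
    """cos of the solar zenith angle at local solar `hour` (0-24)."""
    lat, decl = math.radians(lat_deg), math.radians(decl_deg)
    hour_angle = math.radians(15.0 * (hour - 12.0))  # 15 degrees per hour
    return (math.sin(lat) * math.sin(decl)
            + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_latitude(hours, brightness, candidates):
    """Candidate latitude whose predicted daylight curve best matches."""
    return max(candidates,
               key=lambda lat: correlation(
                   [max(cos_zenith(lat, h), 0.0) for h in hours], brightness))

# Synthetic camera at latitude 40: brightness follows clipped cos z,
# sampled every 30 minutes; the search recovers the true latitude
# because day length at the solstice depends on latitude.
hours = [i * 0.5 for i in range(48)]
brightness = [max(cos_zenith(40.0, h), 0.0) for h in hours]
print(best_latitude(hours, brightness, [-60, -30, 0, 30, 40, 60]))
```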


International Conference on Acoustics, Speech, and Signal Processing | 2010

Compressive sensing and differential image-motion estimation

Nathan Jacobs; Stephen Schuh; Robert Pless

Compressive-sensing cameras are an important new class of sensors that have different design constraints than standard cameras. Surprisingly, little work has explored the relationship between compressive-sensing measurements and differential image motion. We show that, given modest constraints on the measurements and image motions, we can omit the computationally expensive compressive-sensing reconstruction step and obtain more accurate motion estimates with significantly less computation time. We also formulate a compressive-sensing reconstruction problem that incorporates known image motion and show that this method outperforms the state-of-the-art in compressive-sensing video reconstruction.

Collaboration


Dive into Nathan Jacobs's collaborations.

Top Co-Authors

Robert Pless
Washington University in St. Louis

Austin Abrams
Washington University in St. Louis

Michael Dixon
Washington University in St. Louis