Nathan Longbotham
University of Colorado Boulder
Publications
Featured research published by Nathan Longbotham.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012
Nathan Longbotham; Fabio Pacifici; Taylor C. Glenn; Alina Zare; Michele Volpi; Devis Tuia; Emmanuel Christophe; Julien Michel; Jordi Inglada; Jocelyn Chanussot; Qian Du
The 2009-2010 Data Fusion Contest organized by the Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society was focused on the detection of flooded areas using multi-temporal and multi-modal images. Both high spatial resolution optical and synthetic aperture radar data were provided. The goal was not only to identify the best algorithms (in terms of accuracy), but also to investigate the further improvement derived from decision fusion. This paper presents the four awarded algorithms and the conclusions of the contest, investigating both supervised and unsupervised methods and the use of multi-modal data for flood detection. Interestingly, a simple unsupervised change detection method provided accuracy similar to that of the supervised approaches, and a digital elevation model-based predictive method yielded a comparable projected change detection map without using post-event data.
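The kind of simple unsupervised change detection the contest highlighted can be illustrated by differencing two co-registered acquisitions and thresholding the result. The sketch below is a minimal stand-in on synthetic data, not any contest entry; the mean-plus-k-sigma threshold and all values are illustrative assumptions:

```python
import numpy as np

def change_map(pre, post, k=2.0):
    # Flag pixels whose absolute difference exceeds mean + k * std of the
    # difference image -- a minimal unsupervised change-detection rule.
    diff = np.abs(post.astype(float) - pre.astype(float))
    return diff > diff.mean() + k * diff.std()

# Toy example: a 'flooded' block brightens between the two acquisitions.
rng = np.random.default_rng(0)
pre = rng.normal(100.0, 5.0, (64, 64))
post = pre + rng.normal(0.0, 5.0, (64, 64))
post[20:40, 20:40] += 60.0          # simulated change region
mask = change_map(pre, post)
```

Real flood mapping would add radiometric normalization and spatial regularization; this only shows the thresholding idea.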
IEEE Transactions on Geoscience and Remote Sensing | 2012
Nathan Longbotham; Chuck Chaapel; Laurence Bleiler; Christopher Padwick; William J. Emery; Fabio Pacifici
The high-performance camera control systems carried aboard the DigitalGlobe WorldView satellites, WorldView-1 and WorldView-2, are capable of rapid retargeting and high off-nadir imagery collection. This provides the capability to collect dozens of multiangle very high spatial resolution images over a large target area during a single overflight. In addition, WorldView-2 collects eight bands of multispectral data. This paper discusses the improvements in urban classification accuracy available through utilization of the spatial and spectral information from a WorldView-2 multiangle image sequence collected over Atlanta, GA, in December 2009. Specifically, the implications of adding height data and multiangle multispectral reflectance, both derived from the multiangle sequence, to the textural, morphological, and spectral information of a single WorldView-2 image are investigated. The results show an improvement in classification accuracy of 27% and 14% for the spatial and spectral experiments, respectively. Additionally, the multiangle data set allows the differentiation of classes not typically well identified by a single image, such as skyscrapers and bridges as well as flat and pitched roofs.
IEEE Transactions on Geoscience and Remote Sensing | 2014
Fabio Pacifici; Nathan Longbotham; William J. Emery
The analysis of multitemporal very high spatial resolution imagery is too often limited to the sole use of pixel digital numbers, which do not accurately describe the observed targets across the various collections due to the effects of changing illumination, viewing geometries, and atmospheric conditions. This paper demonstrates both qualitatively and quantitatively that not only are physically based quantities necessary to consistently and efficiently analyze these data sets, but the angular information of the acquisitions should also not be neglected, as it can provide unique features of the scenes being analyzed. The data set used is composed of 21 images acquired between 2002 and 2009 by QuickBird over the city of Denver, Colorado. The images were collected near the downtown area and include single-family houses, skyscrapers, apartment complexes, industrial buildings, roads/highways, urban parks, and bodies of water. Experiments show that atmospheric and geometric properties of the acquisitions substantially affect the pixel values and, more specifically, that the raw counts are significantly correlated with the atmospheric visibility. Results of a 22-class urban land cover experiment show that an improvement of 0.374 in terms of the Kappa coefficient can be achieved over the base case of raw pixels when surface reflectance values are combined with the angular decomposition of the time series.
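For reference, the Kappa coefficient used to report the 22-class improvement measures agreement beyond chance and is computed from a confusion matrix; a minimal sketch (not the paper's code, and the example matrix is illustrative):

```python
import numpy as np

def kappa(confusion):
    # Cohen's Kappa: observed agreement corrected for chance agreement,
    # from a square confusion matrix (rows: reference, cols: predicted).
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    observed = np.trace(c) / n
    expected = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2
    return (observed - expected) / (1.0 - expected)

k = kappa([[40, 10], [5, 45]])   # observed 0.85, chance 0.50 -> kappa 0.70
```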
IEEE Geoscience and Remote Sensing Letters | 2015
Heng-Chao Li; Turgay Celik; Nathan Longbotham; William J. Emery
In this letter, we propose a simple yet effective unsupervised change detection approach for multitemporal synthetic aperture radar images from the perspective of clustering. This approach jointly exploits the robust Gabor wavelet representation and the advanced cascade clustering. First, a log-ratio image is generated from the multitemporal images. Then, to integrate contextual information in the feature extraction process, Gabor wavelets are employed to yield the representation of the log-ratio image at multiple scales and orientations, whose maximum magnitude over all orientations in each scale is concatenated to form the Gabor feature vector. Next, a cascade clustering algorithm is designed in this discriminative feature space by successively combining the first-level fuzzy c-means clustering with the second-level nearest neighbor rule. Finally, the two-level combination of the changed and unchanged results generates the final change map. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
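The first stage of the approach, the log-ratio operator, followed by a two-cluster separation of its values, can be sketched as below. This simplified version omits the Gabor feature extraction and the second-level nearest-neighbor rule, and substitutes plain two-means clustering for fuzzy c-means; the synthetic image pair is purely illustrative:

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    # Log-ratio operator: the standard first step for SAR change detection.
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

def cluster_changes(lr, iters=50):
    # 1-D two-means clustering of log-ratio values; the cluster with the
    # larger centroid is labeled 'changed' (stand-in for fuzzy c-means).
    x = lr.ravel()
    centers = np.array([x.min(), x.max()])
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([x[labels == j].mean() for j in (0, 1)])
    return (labels == centers.argmax()).reshape(lr.shape)

# Synthetic pair: multiplicative noise everywhere, 4x backscatter change
# in one block.
rng = np.random.default_rng(1)
img1 = rng.gamma(4.0, 25.0, (32, 32))
img2 = img1 * np.exp(rng.normal(0.0, 0.1, (32, 32)))
img2[8:16, 8:16] *= 4.0
mask = cluster_changes(log_ratio(img1, img2))
```

The log-ratio is used because SAR speckle is multiplicative, so ratios (rather than differences) keep the noise statistics roughly uniform across bright and dark areas.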
urban remote sensing joint event | 2011
Nathan Longbotham; Chad Bleiler; Chuck Chaapel; Chris Padwick; William J. Emery; Fabio Pacifici
In this study, we investigate the ability of the spectral data from a multi-angle WorldView-2 image sequence to improve classification accuracy of an urban scene. Specifically, we investigate the multi-angle reflectance, as well as two data extraction methods applied to the reflectance data, developed from thirteen images collected over downtown Atlanta, GA in Dec. 2009. These images were collected sequentially by WorldView-2 within two minutes and range from approximately 30 degrees off-nadir southward to 30 degrees off-nadir northward.
workshop on hyperspectral image and signal processing evolution in remote sensing | 2014
Nathan Longbotham; Fabio Pacifici; Bill Baugh; Gustavo Camps-Valls
The upcoming WorldView-3 satellite is designed to collect unique data by combining very high spatial resolution (VHR) with observation bands in the shortwave infrared (SWIR) in addition to the visible and near-infrared (VNIR) multispectral and panchromatic bands currently available on the VHR WorldView-2 system. These SWIR bands were specifically selected to target unique reflectance and absorption features presented by various surface materials and should, therefore, significantly improve the platform's information content for many image mining applications. This presentation explores the information content available to the WorldView-3 platform in two ways. First, second-order statistics and mutual information estimates are used to measure the spectral content of simulated WorldView-3, WorldView-2, and QuickBird data relative to AVIRIS hyperspectral imagery. Then, WorldView-3 supervised classification performance is explored relative to that of hyperspectral imagery for both urban and agricultural data sets. Results suggest that the additional spectral content of the WorldView-3 platform provides an information source competitive with hyperspectral imagery for broad applications in agriculture, mineral exploration, and urban monitoring.
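The mutual information estimates mentioned above can be formed from a joint histogram of two bands; a minimal plug-in estimator (not the study's code, with random data standing in for real bands):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    # Histogram-based estimate of mutual information (in bits) between
    # two bands; a crude but common plug-in estimator.
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
band_a = rng.normal(size=10000)
band_b = rng.normal(size=10000)          # independent of band_a
mi_self = mutual_information(band_a, band_a)   # high: band predicts itself
mi_indep = mutual_information(band_a, band_b)  # near zero (estimator bias)
```

Between real sensor bands the estimate falls between these extremes, which is what makes it useful for comparing the redundancy of different band sets.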
Proceedings of SPIE | 2017
Daniela I. Moody; Steven P. Brumby; Rick Chartrand; Ryan Keisler; Nathan Longbotham; Carly Mertes; Samuel W. Skillman; Michael S. Warren
The increase in performance, availability, and coverage of multispectral satellite sensor constellations has led to a drastic increase in data volume and data rate. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes per year of daily high-resolution global coverage imagery. The data analysis capability, however, has lagged behind storage and compute developments and has traditionally focused on individual scene processing. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and can scale with the high rate and dimensionality of the imagery being collected. We investigate and compare the performance of pixel-level crop identification using tree-based classifiers and its dependence on both temporal and spectral features. Classification performance is assessed using, as ground truth, the Cropland Data Layer (CDL) crop masks generated by the US Department of Agriculture (USDA). The CDL maps provide pixel-level labels at 30 m spatial resolution for around 200 categories of land cover but are only available after the growing season. The analysis focuses on McCook County in South Dakota and shows crop classification using a temporal stack of Landsat 8 (L8) imagery over the growing season, from April through October. Specifically, we consider the temporal L8 stack depth, as well as different normalized band difference indices, and evaluate their contribution to crop identification. We also show an extension of our algorithm to map corn and soy crops in the state of Mato Grosso, Brazil.
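A normalized band difference index of the kind evaluated above is computed per pixel as (b1 - b2)/(b1 + b2); for NDVI, b1 is the near-infrared band and b2 the red band. A short sketch with illustrative reflectance values (not the study's data):

```python
import numpy as np

def normalized_difference(b1, b2, eps=1e-9):
    # Generic normalized band-difference index; NDVI = ND(NIR, red).
    b1 = np.asarray(b1, dtype=float)
    b2 = np.asarray(b2, dtype=float)
    return (b1 - b2) / (b1 + b2 + eps)

# Illustrative surface-reflectance values (on Landsat 8, NIR is band 5
# and red is band 4); dense vegetation -> high NDVI, bare soil -> low.
nir = np.array([[0.50, 0.40], [0.30, 0.05]])
red = np.array([[0.10, 0.10], [0.10, 0.04]])
ndvi = normalized_difference(nir, red)
```

Such indices are popular temporal features for crop classification because they track the seasonal green-up and senescence of each crop type.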
Fourier Transform Spectroscopy and Hyperspectral Imaging and Sounding of the Environment (2015), paper HW3B.2 | 2015
Nathan Longbotham; Fabio Pacifici; Seth Malitz; William M. Baugh; Gustau Camps-Valls
The new WorldView-3 satellite provides a unique combination of very high spatial resolution and super-spectral capabilities. This presentation explores the practical and theoretical usefulness of this platform as compared against other hyperspectral and multispectral sensors.
international geoscience and remote sensing symposium | 2013
Nathan Longbotham; William J. Emery; Fabio Pacifici
Multi-temporal multi-angle data provide multiple images, collected over both time and satellite view angle, of a single target area. In this research, these data are used to explore the fundamental physical distortions present in very high spatial resolution optical data as applied to land-use/land-cover classification. This is done by creating a land-use/land-cover model in one image of the multi-temporal multi-angle data and directly applying it to the remaining images. This direct measure of model portability provides unique insight into the dependence of very high spatial resolution land-use/land-cover classification on the atmosphere, solar illumination, and model training location.
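The model-portability experiment described above (train on one acquisition, apply unchanged to another) can be mimicked with a toy nearest-centroid classifier and a simulated radiometric offset. Everything below is illustrative, not the paper's setup; the point is only how an un-adapted model degrades when the radiometry shifts:

```python
import numpy as np

def centroid_fit(X, y):
    # Nearest-centroid 'land-cover model': per-class mean spectra.
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def centroid_predict(X, classes, centroids):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

# Two toy classes of 4-band spectra from 'image A'.
rng = np.random.default_rng(2)
Xa = np.vstack([rng.normal(0.2, 0.02, (100, 4)),
                rng.normal(0.6, 0.02, (100, 4))])
ya = np.repeat([0, 1], 100)
classes, cents = centroid_fit(Xa, ya)

# 'Image B': same scene with a radiometric offset (simulated change in
# illumination/atmosphere). Applying the model directly degrades accuracy.
Xb = Xa + 0.25
acc_a = (centroid_predict(Xa, classes, cents) == ya).mean()
acc_b = (centroid_predict(Xb, classes, cents) == ya).mean()
```

Converting to surface reflectance before training, as the related papers above advocate, is precisely what reduces this kind of cross-acquisition drop.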
international geoscience and remote sensing symposium | 2012
Fabio Pacifici; Nathan Longbotham; William J. Emery
Although commercial optical very high spatial resolution satellite imagery has been available for more than 10 years, very little research has been done to take advantage of its multi-temporal and multi-angular information. In this paper, the benefits of using surface reflectance for the analysis of multi-temporal and multi-angular images are discussed using a 23-image time series acquired between 2002 and 2010 by QuickBird and WorldView-2 over the city of Denver, Colorado. Results show that it is possible to extract useful information from multi-angular data regarding the structure of specific objects.