Robert S. Rand
National Geospatial-Intelligence Agency
Publication
Featured research published by Robert S. Rand.
IEEE Transactions on Geoscience and Remote Sensing | 2003
Robert S. Rand; Daniel M. Keenan
A Bayesian approach to partitioning hyperspectral imagery into homogeneous regions is investigated, where spatial consistency is imposed on the spectral content of sites in each partition. An energy function is investigated that models disparities in an image that are defined with respect to a local neighborhood system. This energy function uses one of, or certain combinations of, the spectral angle, Euclidean distance, and mean-adjusted Kolmogorov-Smirnov measures. Maximum a posteriori estimates are computed using an algorithm that is implemented as a multigrid process to improve global labeling and reduce computational intensity. Both constrained and unconstrained multigrid approaches are considered. A locally extended neighborhood structure is introduced with the intention of encouraging more accurate global labeling. The present effort is focused on terrain mapping applications using hyperspectral imagery containing narrow bands throughout the 400-2500-nm spectral region. Experiments are conducted on a scene of 210-band HYDICE imagery collected over an area that contains a diverse range of terrain features and that is supported with ground truth. Quantitative measures of local consistency (smoothness) and global labeling, along with class maps, demonstrate the benefits of applying this method for unsupervised and supervised classification, where the best results are achieved with an energy function consisting of the combined spectral angle and Euclidean distance measures.
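As a minimal sketch (not the paper's exact formulation), the following assumes a disparity term that combines spectral angle and Euclidean distance between neighboring pixel spectra, summed over same-labeled neighbors in a local clique; the weights alpha and beta and the window handling are illustrative assumptions.

    import numpy as np

    def pairwise_disparity(x, y, alpha=0.5, beta=0.5):
        """Disparity between two pixel spectra, combining spectral angle
        and Euclidean distance (alpha, beta are illustrative weights)."""
        cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        angle = np.arccos(np.clip(cos, -1.0, 1.0))   # spectral angle (radians)
        dist = np.linalg.norm(x - y)                 # Euclidean distance
        return alpha * angle + beta * dist

    def clique_energy(cube, i, j, labels, radius=1):
        """Sum of disparities between pixel (i, j) and its same-labeled
        neighbors in a (2*radius+1)^2 window, as in an MRF-style prior."""
        rows, cols, _ = cube.shape
        e = 0.0
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                ni, nj = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= ni < rows and 0 <= nj < cols:
                    if labels[ni, nj] == labels[i, j]:
                        e += pairwise_disparity(cube[i, j], cube[ni, nj])
        return e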
Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV | 2008
Yu-Cherng Channing Chang; Hsuan Ren; Chein-I Chang; Robert S. Rand
Many hyperspectral imaging algorithms are available for applications such as spectral unmixing, subpixel detection, quantification, endmember extraction, classification, and compression, with many more yet to come. It is very difficult to evaluate and validate different algorithms developed and designed for the same application. This paper designs a set of standardized synthetic images that simulate various scenarios so that different algorithms can be validated and evaluated on the same ground with completely controllable environments. Two types of scenarios are developed to simulate how a target can be inserted into the image background. One is called Target Implantation (TI), which implants a target pixel by removing the background pixel it intends to replace. This type of scenario is of particular interest in endmember extraction, where pure signatures can be simulated and inserted into the background with guaranteed 100% purity. The other is called Target Embeddedness (TE), which embeds a target pixel by adding the target pixel to the background pixel it intends to insert. This type of scenario can be used to simulate signal detection models where the noise is additive. For each of the two types, three scenarios are designed to simulate different levels of target knowledge by adding Gaussian noise. To make these six scenarios a standardized data set for experiments, the data used to generate the synthetic images can be chosen from a database or spectral library available in the public domain, and no particular data are required to simulate these synthetic images. With these six scenarios, an algorithm can be assessed objectively and compared fairly to other algorithms in the same setting. This paper demonstrates how the six scenarios can be used to evaluate various algorithms in applications of subpixel detection, mixed pixel classification/quantification, and endmember extraction.
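A minimal sketch of the two insertion mechanisms, assuming a reflectance cube of shape (rows, cols, bands); the noise level sigma and target fraction frac are illustrative parameters, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def implant_target(cube, i, j, target, sigma=0.0):
        """Target Implantation (TI): replace the background pixel so the
        inserted pixel is 100% target (plus optional Gaussian noise)."""
        out = cube.copy()
        out[i, j] = target + rng.normal(0.0, sigma, size=target.shape)
        return out

    def embed_target(cube, i, j, target, frac=0.3, sigma=0.0):
        """Target Embeddedness (TE): add the target signature to the
        existing background pixel (an additive-signal model); 'frac'
        is an illustrative target proportion."""
        out = cube.copy()
        out[i, j] = cube[i, j] + frac * target \
            + rng.normal(0.0, sigma, size=target.shape)
        return out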
Proceedings of SPIE | 2013
Robert S. Rand; Amit Banerjee; Joshua B. Broadwater
Various phenomena occur in geographic regions that cause pixels of a scene to be spectrally mixed. The mixtures may be linear or nonlinear. It could simply be that the pixel size of a sensor is too large, so that many pixels contain patches of different materials within them (linear), or there could be microscopic mixtures and multiple scattering occurring within pixels (nonlinear). Often enough, scenes may contain both linear and nonlinear mixing on a pixel-by-pixel basis. Furthermore, appropriate endmembers in a scene are not always easy to determine. A reference spectral library of materials may or may not be available, and even if a library is available, using it directly for spectral unmixing may not always be fruitful. This study investigates a generalized kernel-based method for spectral unmixing that attempts to determine whether each pixel in a scene is linear or nonlinear, and adapts to compute a mixture model at each pixel accordingly. The effort also investigates a kernel-based support vector method for determining spectral endmembers in a scene. Two scenes of hyperspectral imagery calibrated to reflectance are used to validate the methods. We test the approaches using a HyMAP scene collected over the Waimanalo Bay region in Oahu, Hawaii, as well as an AVIRIS scene collected over the oil spill region in the Gulf of Mexico during the Deepwater Horizon incident.
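The paper's generalized kernel method is not reproduced here; as a hedged sketch of per-pixel model selection, the following compares a nonnegative linear fit against a simple bilinear mixture (used only as a stand-in for a nonlinear model) and labels each pixel by whichever reconstructs it better. The margin tol and the omission of the sum-to-one constraint are simplifications.

    import numpy as np
    from scipy.optimize import nnls

    def linear_residual(r, M):
        """Constrained linear unmixing residual via nonnegative least
        squares (M is bands x endmembers; sum-to-one omitted)."""
        a, res = nnls(M, r)
        return res, a

    def bilinear_residual(r, M):
        """Residual under a simple bilinear model (endmember cross
        products appended as extra columns), a stand-in for the
        paper's generalized kernel formulation."""
        n = M.shape[1]
        cross = [M[:, i] * M[:, j] for i in range(n) for j in range(i + 1, n)]
        M_aug = np.column_stack([M] + cross) if cross else M
        a, res = nnls(M_aug, r)
        return res, a[:n]

    def unmix_pixel(r, M, tol=0.05):
        """Pick whichever model reconstructs the pixel better; a lower
        bilinear residual by margin 'tol' flags the pixel as nonlinear."""
        res_lin, a_lin = linear_residual(r, M)
        res_bil, a_bil = bilinear_residual(r, M)
        if res_bil < (1.0 - tol) * res_lin:
            return "nonlinear", a_bil
        return "linear", a_lin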
International Geoscience and Remote Sensing Symposium | 2012
Alexey Castrodad; Timothy Khuon; Robert S. Rand; Guillermo Sapiro
Several studies suggest that the use of geometric features along with spectral information improves the classification and visualization quality of hyperspectral imagery. These studies normally make use of spatial neighborhoods of hyperspectral pixels for extracting these geometric features. In this work, we merge point cloud Light Detection and Ranging (LiDAR) data and hyperspectral imagery (HSI) into a single sparse modeling pipeline for subpixel mapping and classification. The model accounts for material variability and noise by using learned dictionaries that act as spectral endmembers. Additionally, the estimated abundances are influenced by the LiDAR point cloud density, which is particularly helpful in spectral mixtures involving partial occlusions and illumination changes caused by elevation differences. We demonstrate the advantages of the proposed algorithm with co-registered LiDAR-HSI data.
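A minimal sketch of the dictionary-as-endmembers idea using scikit-learn; the random data, n_components, and sparsity level alpha are all illustrative, and the LiDAR-driven weighting of the abundances described in the paper is only noted in a comment.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode

    # Hypothetical pixel matrix: (n_pixels, n_bands).
    rng = np.random.default_rng(0)
    X = np.abs(rng.normal(size=(500, 64)))

    # Learn a dictionary whose atoms act as spectral (sub)endmembers,
    # absorbing material variability and noise.
    dl = DictionaryLearning(n_components=12, alpha=1.0, max_iter=200,
                            random_state=0)
    D = dl.fit(X).components_              # (12, 64) spectral atoms

    # Sparse abundances per pixel; per the paper, LiDAR point-cloud
    # density would additionally influence these codes (not shown).
    A = sparse_encode(X, D, algorithm="lasso_lars", alpha=0.5)
    print(A.shape)                         # (500, 12)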
Proceedings of SPIE | 2011
Robert S. Rand; Roger N. Clark; K. Eric Livo
The Deepwater Horizon oil spill covered a very large geographical area in the Gulf of Mexico, creating potentially serious environmental impacts on both marine life and the coastal shorelines. Knowing the oil's areal extent and thickness, as well as denoting different categories of the oil's physical state, is important for assessing these impacts. High spectral resolution data from hyperspectral imaging (HSI) sensors such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) provide a valuable source of information that can be used by semi-automatic methods for tracking an oil spill's areal extent, oil thickness, and oil categories. However, the spectral behavior of oil in water is inherently a highly nonlinear and variable phenomenon that changes depending on oil thickness and oil/water ratios. For certain oil thicknesses there are well-defined absorption features, whereas for very thin films there are sometimes almost no observable features. Feature-based imaging spectroscopy methods are particularly effective at classifying materials that exhibit specific well-defined spectral absorption features. Statistical methods are effective at classifying materials with spectra that exhibit a considerable amount of variability and that do not necessarily exhibit well-defined spectral absorption features. This study investigates feature-based and statistical methods for analyzing oil spills using hyperspectral imagery. The appropriate use of each approach is investigated and a combined feature-based and statistical method is proposed.
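Feature-based spectroscopy methods typically begin by removing the continuum so absorption features can be compared by depth and position. A minimal continuum-removal sketch (upper convex hull of the spectrum, divided out) is shown below; it is a standard preprocessing step, not the paper's specific algorithm, and it assumes positive reflectance values.

    import numpy as np

    def continuum_removed(wl, refl):
        """Divide a spectrum by its upper convex hull (the continuum),
        isolating absorption features for feature-based analysis."""
        # Monotone-chain sweep keeping only upper-hull vertices.
        hull = [0]
        for i in range(1, len(wl)):
            while len(hull) >= 2:
                i1, i2 = hull[-2], hull[-1]
                # Pop the last vertex if it lies below the chord i1 -> i.
                cross = (wl[i2] - wl[i1]) * (refl[i] - refl[i1]) \
                      - (refl[i2] - refl[i1]) * (wl[i] - wl[i1])
                if cross >= 0:
                    hull.pop()
                else:
                    break
            hull.append(i)
        continuum = np.interp(wl, wl[hull], refl[hull])
        return refl / continuum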
Optical Science and Technology, the SPIE 49th Annual Meeting | 2004
Edmundo Simental; Edward H. Bosch; Robert S. Rand
Advances in hyperspectral sensor technology increasingly provide higher resolution and higher quality data for the accurate generation of terrain categorization/classification (TERCAT) maps. The generation of TERCAT maps from hyperspectral imagery can be accomplished using a variety of spectral pattern analysis algorithms; however, the algorithms are sometimes complex, and the training of such algorithms can be tedious. Further, hyperspectral imagery contains a voluminous amount of data, with contiguous spectral bands being highly correlated. These highly correlated bands tend to provide redundant information for classification/feature extraction computations. In this paper, we introduce the use of wavelets to generate a set of Generalized Difference Feature Index (GDFI) measures, which transforms a hyperspectral image cube into a derived set of GDFI bands. A commonly known special case of the proposed GDFI approach is the Normalized Difference Vegetation Index (NDVI) measure, which seeks to emphasize vegetation in a scene. Numerous other band-ratio measures that emphasize other specific ground features can be shown to be special cases of the proposed GDFI approach. Generating a set of GDFI bands is fast and simple. However, the number of possible bands is vast, and only a few of these “generalized ratios” will be useful. Judicious data mining of the large set of GDFI bands produces a small subset of GDFI bands designed to extract specific TERCAT features. We extract/classify several terrain features and compare our results with the results of a more sophisticated neural network feature extraction routine.
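The wavelet-based GDFI construction itself is not reproduced here; as a grounding example, the following computes the familiar two-band normalized difference, of which NDVI (b1 = NIR, b2 = red) is the special case the paper cites.

    import numpy as np

    def normalized_difference(cube, b1, b2):
        """Normalized difference between two bands of a (rows, cols,
        bands) cube; NDVI is the special case b1 = NIR, b2 = red."""
        num = cube[..., b1] - cube[..., b2]
        den = cube[..., b1] + cube[..., b2]
        # Guard against zero-sum pixels (returns 0 there).
        return np.divide(num, den,
                         out=np.zeros_like(num, dtype=float),
                         where=den != 0)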
Journal of Applied Remote Sensing | 2017
Robert S. Rand; Ronald G. Resmini; David W. Allen
Linear mixtures of materials in a scene often occur because the resolution of a sensor is relatively coarse, resulting in pixels containing patches of different materials within them. This phenomenon causes nonoverlapping areal mixing and can be modeled by a linear mixture model. More complex phenomena, such as multiple scattering in mixtures of vegetation, soils, granular, and microscopic materials within pixels, can result in intimate mixing with varying degrees of nonlinear behavior. In such cases, a linear model is not sufficient. This study considers two approaches for unmixing pixels in a scene that may contain linear or intimate (nonlinear) mixtures. The first method is based on earlier studies indicating that nonlinear mixtures in reflectance space are approximately linear in albedo space. The method converts reflectance to single-scattering albedo according to Hapke theory and uses a constrained linear model on the computed albedo values. The second method is motivated by the same idea, but uses a kernel that seeks to capture the linear behavior of albedo in nonlinear mixtures of materials. This study compares the two approaches, paying particular attention to their behavior on linear and intimate mixtures. Both laboratory and airborne collections of hyperspectral imagery are used to validate the methods.
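A hedged sketch of the first approach: invert a simplified isotropic Hapke relation (no opposition effect; the viewing geometry, normalization, and equal-weight albedo mixing are assumptions of this sketch, not the paper's exact formulation) to obtain single-scattering albedo per band, then unmix with nonnegative least squares in albedo space.

    import numpy as np
    from scipy.optimize import brentq, nnls

    def hapke_refl(w, mu0=1.0, mu=1.0):
        """Simplified isotropic Hapke reflectance for single-scattering
        albedo w, using the approximate H-function (no opposition
        effect; geometry mu0, mu is an assumption of this sketch)."""
        H = lambda x: (1 + 2 * x) / (1 + 2 * x * np.sqrt(1 - w))
        return (w / 4.0) / (mu0 + mu) * H(mu0) * H(mu)

    def refl_to_albedo(r, mu0=1.0, mu=1.0):
        """Numerically invert the relation per band; r must lie within
        the model's valid range for the chosen geometry."""
        return np.array([brentq(lambda w: hapke_refl(w, mu0, mu) - ri,
                                1e-9, 1 - 1e-9)
                         for ri in np.atleast_1d(r)])

    def unmix_albedo(r_pixel, E_albedo):
        """Unmix in albedo space, where intimate mixtures behave
        approximately linearly; E_albedo is (bands x endmembers) of
        endmember *albedo* spectra (sum-to-one handled by rescaling)."""
        w_pixel = refl_to_albedo(r_pixel)
        f, _ = nnls(E_albedo, w_pixel)
        return f / f.sum() if f.sum() > 0 else f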
Applied Imagery Pattern Recognition Workshop | 2012
Timothy Khuon; Robert S. Rand; John B. Greer; Eric Truslow
A distributed architecture for adaptive sensor fusion (a multisensor fusion neural net) is introduced for 3D imagery data that makes use of a super-resolution technique computed with a Bregman-iteration deconvolution algorithm. This architecture is a cascaded neural network consisting of two levels of neural networks. The first level consists of sensor networks: two independent sensor neural nets, namely a spatial neural net and a spectral neural net. The second level is a fusion neural net, a single network that combines the information from the sensor level. The inputs to the sensor networks are obtained from unsupervised spatial and spectral segmentation algorithms that can be applied to the original imagery or to imagery enhanced by the proposed super-resolution process. Spatial segmentation is obtained by a mean-shift method and spectral segmentation is obtained by a Stochastic Expectation-Maximization method. The decision outputs from the sensor nets are used to train the fusion net to a specific overall decision. The overall approach is tested with an experiment involving a multisensor airborne collection of LiDAR and hyperspectral data over a university campus in Gulfport, MS. The success of the system in utilizing sensor synergism for an enhanced classification is clearly demonstrated. The final class map contains the geographical classes as well as the signature classes.
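A minimal sketch of the cascaded two-level design, assuming scikit-learn MLPs and randomly generated stand-in features; in practice the sensor nets and the fusion net would be trained on separate splits, and the feature dimensions here are illustrative.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Stand-in per-pixel features from the two unsupervised segmenters
    # (spatial: mean-shift; spectral: SEM); shapes are illustrative.
    X_spatial = rng.normal(size=(1000, 8))
    X_spectral = rng.normal(size=(1000, 20))
    y = rng.integers(0, 5, size=1000)

    # Level 1: independent sensor neural nets.
    net_spatial = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                random_state=0).fit(X_spatial, y)
    net_spectral = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                 random_state=0).fit(X_spectral, y)

    # Level 2: fusion net trained on the sensor nets' decision outputs.
    Z = np.hstack([net_spatial.predict_proba(X_spatial),
                   net_spectral.predict_proba(X_spectral)])
    fusion = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                           random_state=0).fit(Z, y)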
Proceedings of SPIE | 2012
Robert S. Rand; Timothy Khuon
An architecture for neural net multisensor data fusion is introduced and analyzed. This architecture consists of a set of independent sensor neural nets, one for each sensor, coupled to a fusion net. The neural net of each sensor is trained (from a representative data set of the particular sensor) to map to a hypothesis space output. The decision outputs from the sensor nets are used to train the fusion net to an overall decision. To begin the processing, the 3D point cloud LiDAR data is classified into clustered objects based on a multi-dimensional mean-shift segmentation and classification. Similarly, the multi-band HSI data is spectrally classified by Stochastic Expectation-Maximization (SEM) into a classification map containing pixel classes. For sensor fusion, spatial detections and spectral detections complement each other. They are fused into final detections by a cascaded neural network consisting of two levels of neural nets. The first level is the sensor level, consisting of two neural nets: a spatial neural net and a spectral neural net. The second level consists of a single neural net, the fusion neural net. The success of the system in utilizing sensor synergism for an enhanced classification is clearly demonstrated by applying this architecture to a November 2010 airborne data collection of LiDAR and HSI over the Gulfport, MS, area.
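The spectral branch relies on SEM classification. As a hedged stand-in (standard EM rather than the stochastic variant the paper uses), a Gaussian mixture fit to the pixel spectra yields the per-pixel class map; the number of classes and the covariance form are illustrative choices.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    pixels = rng.normal(size=(5000, 30))   # (n_pixels, n_bands), hypothetical

    # Standard EM on a Gaussian mixture, standing in for the paper's
    # Stochastic Expectation-Maximization (SEM) spectral classifier.
    gmm = GaussianMixture(n_components=6, covariance_type="diag",
                          random_state=0).fit(pixels)
    class_map = gmm.predict(pixels)        # per-pixel spectral class labels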
Proceedings of SPIE | 2010
Robert S. Rand
A neurodynamical approach to scene segmentation of hyperspectral imagery is investigated based on oscillatory correlation theory. A network of relaxation oscillators, based on the Locally Excitatory Globally Inhibitory Oscillator Network (LEGION), is extended to process multiband data and is implemented to perform unsupervised scene segmentation using both spatial and spectral information. The nonlinear dynamical network is capable of segmenting objects in a scene through the synchronization of oscillators that receive local excitatory inputs from a collection of local neighbors, and desynchronization between oscillators corresponding to different objects. The original LEGION model was designed for single-band imagery. The proposed multiband version of LEGION is implemented such that the connections in the oscillator network receive the spectral pixel vectors in the hyperspectral data as excitatory inputs. Euclidean distances between spectra in local neighborhoods are used as the measure of closeness in the network. The ability of the proposed approach to perform natural and urban scene segmentation for geospatial analysis is assessed. Our approach is tested on two hyperspectral datasets with notably different sensor properties and scene content.
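The full LEGION dynamics (excitatory-inhibitory relaxation oscillators with a global inhibitor) are beyond a short sketch; the fragment below shows only the multiband coupling idea, assuming excitatory weights between 4-connected neighbors that decay with the Euclidean distance between their spectra. The Gaussian form and sigma are illustrative choices, not the paper's.

    import numpy as np

    def coupling_weights(cube, sigma=0.1):
        """Excitatory coupling between 4-connected neighbors, decaying
        with the Euclidean distance between their spectral vectors
        (sigma is an illustrative scale parameter)."""
        rows, cols, _ = cube.shape
        W = {}
        for i in range(rows):
            for j in range(cols):
                for di, dj in ((0, 1), (1, 0)):   # right and down neighbors
                    ni, nj = i + di, j + dj
                    if ni < rows and nj < cols:
                        d = np.linalg.norm(cube[i, j] - cube[ni, nj])
                        W[(i, j, ni, nj)] = np.exp(-d**2 / (2 * sigma**2))
        return W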