Edward H. Bosch
National Geospatial-Intelligence Agency
Publications
Featured research published by Edward H. Bosch.
IEEE Transactions on Geoscience and Remote Sensing | 2011
Alexey Castrodad; Zhengming Xing; John B. Greer; Edward H. Bosch; Lawrence Carin; Guillermo Sapiro
A method is presented for subpixel modeling, mapping, and classification in hyperspectral imagery using learned block-structured discriminative dictionaries, where each block is adapted and optimized to represent a material in a compact and sparse manner. The spectral pixels are modeled by linear combinations of subspaces defined by the learned dictionary atoms, allowing for linear mixture analysis. This model provides flexibility in source representation and selection, thus accounting for spectral variability, small-magnitude errors, and noise. A spatial-spectral coherence regularizer in the optimization allows pixel classification to be influenced by similar neighbors. We extend the proposed approach to the case in which there is no prior knowledge of the materials in the scene (unsupervised classification) and provide experiments and comparisons with simulated and real data. We also present results when the data have been significantly undersampled and then reconstructed, still retaining high-performance classification, showing the potential role of compressive sensing and sparse modeling techniques in efficient acquisition/transmission missions for hyperspectral imagery.
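The block-structured mixing idea can be illustrated with a minimal sketch (the dictionaries, material names, and mixing fractions below are hypothetical stand-ins, not the learned dictionaries or the spatial-spectral regularizer of the paper): a pixel is coded against per-material dictionary blocks with nonnegative least squares, and the per-block coefficient weight serves as a rough abundance estimate.

# Minimal sketch of block-structured dictionary mixing for a single hyperspectral
# pixel. Dictionaries are random stand-ins for learned, per-material blocks; the
# paper's discriminative learning and spatial-spectral regularizer are omitted.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, atoms_per_block = 50, 4
materials = ["soil", "vegetation", "asphalt"]          # hypothetical classes

# One dictionary block per material (columns = spectral atoms).
blocks = {m: rng.random((n_bands, atoms_per_block)) for m in materials}
D = np.hstack([blocks[m] for m in materials])          # concatenated dictionary

# Simulate a mixed pixel: 70% soil, 30% vegetation, plus noise.
x = 0.7 * blocks["soil"].mean(axis=1) + 0.3 * blocks["vegetation"].mean(axis=1)
x += 0.01 * rng.standard_normal(n_bands)

# Nonnegative coding against all blocks (linear mixture analysis).
coef, _ = nnls(D, x)

# Abundance proxy: total nonnegative coefficient weight per block, normalized.
weight = np.array([np.sum(coef[i * atoms_per_block:(i + 1) * atoms_per_block])
                   for i in range(len(materials))])
fractions = weight / weight.sum()
for m, f in zip(materials, fractions):
    print(f"{m}: {f:.2f}")

The recovered fractions depend on how well the stand-in dictionaries separate the materials; with discriminatively learned blocks, as in the paper, the per-block weights become far more selective.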
IEEE Geoscience and Remote Sensing Letters | 2007
Anish Mohan; Guillermo Sapiro; Edward H. Bosch
The nonlinear dimensionality reduction and its effects on vector classification and segmentation of hyperspectral images are investigated in this letter. In particular, the way dimensionality reduction influences and helps classification and segmentation is studied. The proposed framework takes into account the nonlinear nature of high-dimensional hyperspectral images and projects onto a lower dimensional space via a novel spatially coherent locally linear embedding technique. The spatial coherence is introduced by comparing pixels based on their local surrounding structure in the image domain and not just on their individual values as classically done. This spatial coherence in the image domain across the multiple bands defines the high-dimensional local neighborhoods used for the dimensionality reduction. This spatial coherence concept is also extended to the segmentation and classification stages that follow the dimensionality reduction, introducing a modified vector angle distance. We present the underlying concepts of the proposed framework and experimental results showing significant classification improvements.
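A hedged sketch of the spatial-coherence idea (cube size, neighborhood choice, and the use of standard LLE are illustrative simplifications, not the letter's exact embedding): each pixel is described by its own spectrum stacked with the spectra of its four spatial neighbors, and the resulting spatial-spectral vectors are embedded into a low-dimensional space.

# Sketch of spatially coherent dimensionality reduction: each pixel is described by
# its spectrum stacked with the spectra of its 4 spatial neighbors, and the vectors
# are embedded with standard LLE. This only approximates the letter's spatially
# coherent LLE; cube dimensions and neighborhood are illustrative.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)
rows, cols, bands = 20, 20, 30
cube = rng.random((rows, cols, bands))                 # stand-in hyperspectral cube

# Build spatial-spectral feature vectors (pixel + 4-neighborhood), interior pixels only.
feats = []
for i in range(1, rows - 1):
    for j in range(1, cols - 1):
        neighborhood = [cube[i, j], cube[i - 1, j], cube[i + 1, j],
                        cube[i, j - 1], cube[i, j + 1]]
        feats.append(np.concatenate(neighborhood))
feats = np.asarray(feats)

# Nonlinear dimensionality reduction on the spatially coherent features.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
embedded = lle.fit_transform(feats)
print(embedded.shape)                                  # (324, 3)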
international conference on image processing | 2010
Alexey Castrodad; Zhengming Xing; John B. Greer; Edward H. Bosch; Lawrence Carin; Guillermo Sapiro
Recent advances in sparse modeling and dictionary learning for discriminative applications show high potential for numerous classification tasks. In this paper, we show that highly accurate material classification from hyperspectral imagery (HSI) can be obtained with these models, even when the data is reconstructed from a very small percentage of the original image samples. The proposed supervised HSI classification is performed using a measure that accounts for both reconstruction errors and sparsity levels for sparse representations based on class-dependent learned dictionaries. Combining the dictionaries learned for the different materials, a linear mixing model is derived for sub-pixel classification. Results with real hyperspectral data cubes are shown both for urban and non-urban terrain.
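The class-dependent scoring can be sketched as follows (the dictionaries, sparsity weight, and coder settings are placeholders, not learned values from the paper): a pixel is coded against each class's dictionary and assigned to the class minimizing reconstruction error plus a weighted sparsity term.

# Sketch of class-dependent scoring: code a pixel against each class's dictionary
# and pick the class minimizing reconstruction error + a sparsity penalty.
# Dictionaries and the weight lam are placeholders, not learned values.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_bands, n_atoms, lam = 40, 8, 0.1
class_dicts = {c: rng.random((n_bands, n_atoms)) for c in ("grass", "road", "roof")}

x = class_dicts["road"] @ rng.random(n_atoms)          # pixel built from "road" atoms
x += 0.01 * rng.standard_normal(n_bands)

def score(D, x, lam):
    """Reconstruction error + weighted l1 sparsity of the nonnegative sparse code."""
    coder = Lasso(alpha=0.01, fit_intercept=False, positive=True, max_iter=5000)
    coder.fit(D, x)
    a = coder.coef_
    return np.sum((x - D @ a) ** 2) + lam * np.abs(a).sum()

label = min(class_dicts, key=lambda c: score(class_dicts[c], x, lam))
print("predicted class:", label)                       # typically "road" for this pixel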
SPIE Newsroom | 2013
Edward H. Bosch; Alexey Castrodad; John S. Cooper; Julia Dobrosotskaya; Wojciech Czaja
There are many models for analyzing and synthesizing 2D data at a variety of different scales and directions. Characterizing directional information in data is significant since it provides information on the location of key objects in the data. These locations in turn are used to describe where these objects are relative to each other. However, often these models only account for a given set of directions, and, furthermore, they tend to be computationally intensive. For example, the theoretical developments described in the large number of published articles on wavelet transforms provide for data exploitation along horizontal, vertical, and diagonal directions as well as multiple scales. However, several techniques have been devised to model and exploit 2D data along multiple directions [1]. Other state-of-the-art methods in sparse directional representations include contourlets [2], curvelets [3], shearlets [4], and multidirectional wavelets [5]. These methods have been extensively analyzed, and their success in providing near-optimal geometric decompositions of signals has been established [2-5]. As mentioned, these models are often difficult to implement, and their high-dimensional analogs are still not well understood. We propose a computationally efficient model for analyzing 1D and 2D multiscale and multidirectional data. The model is based on a mathematical concept known as tight frames. Redundancy is a desirable property for many applications, including signal denoising [6], classification [7], sparse representations of signals [8, 9], and compressive sensing [10]. It is in relation to the last two applications that we see the biggest potential of frames in creating representation models that can be optimized for a specific application. The proposed models are an extension of earlier reported research [11, 12]. That work demonstrated that, with the proper choice of functions, the models can remove directional smooth content in 2D data.
Figure 1. Radial lines varying in intensity and originating from the origin at 0, 14.03, 26.51, 36.86, 45, 53.13, 63.43, 75.96, and 90 degrees.
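The tight-frame property at the heart of the model can be shown with a minimal sketch (the particular bases, signal, and frame bound below are illustrative choices, not the construction from the article): stacking two orthonormal bases yields a redundant system whose frame operator is a multiple of the identity, so analysis followed by synthesis reconstructs the signal exactly.

# Sketch of the tight-frame idea: stacking two orthonormal bases (the standard
# basis and an orthonormal DCT basis) gives a redundant, tight frame whose frame
# operator F^T F equals 2*I, so signals are reconstructed exactly from their
# frame coefficients after dividing by the frame bound.
import numpy as np
from scipy.fft import dct

n = 16
identity_basis = np.eye(n)
dct_basis = dct(np.eye(n), norm="ortho", axis=0)       # orthonormal DCT-II basis

F = np.vstack([identity_basis, dct_basis])             # 2n x n analysis operator
frame_operator = F.T @ F
print(np.allclose(frame_operator, 2 * np.eye(n)))      # True: tight frame, bound 2

x = np.random.default_rng(3).standard_normal(n)
coeffs = F @ x                                         # redundant analysis
x_rec = F.T @ coeffs / 2                               # synthesis with frame bound
print(np.allclose(x, x_rec))                           # True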
Optical Science and Technology, the SPIE 49th Annual Meeting | 2004
Edmundo Simental; Edward H. Bosch; Robert S. Rand
Advances in hyperspectral sensor technology increasingly provide higher resolution and higher quality data for the accurate generation of terrain categorization/classification (TERCAT) maps. The generation of TERCAT maps from hyperspectral imagery can be accomplished using a variety of spectral pattern analysis algorithms; however, the algorithms are sometimes complex, and the training of such algorithms can be tedious. Further, hyperspectral imagery contains a voluminous amount of data with contiguous spectral bands being highly correlated. These highly correlated bands tend to provide redundant information for classification/feature extraction computations. In this paper, we introduce the use of wavelets to generate a set of Generalized Difference Feature Index (GDFI) measures, which transforms a hyperspectral image cube into a derived set of GDFI bands. A commonly known special case of the proposed GDFI approach is the Normalized Difference Vegetation Index (NDVI) measure, which seeks to emphasize vegetation in a scene. Numerous other band-ratio measures that emphasize other specific ground features can be shown to be a special case of the proposed GDFI approach. Generating a set of GDFI bands is fast and simple. However, the number of possible bands is very large, and only a few of these “generalized ratios” will be useful. Judicious data mining of the large set of GDFI bands produces a small subset of GDFI bands designed to extract specific TERCAT features. We extract and classify several terrain features and compare our results with those of a more sophisticated neural network feature extraction routine.
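The relationship between NDVI and a generalized difference index can be sketched as below (band positions and weights are illustrative, and the simple weighted-difference form only mimics the paper's wavelet-derived GDFI measures): NDVI divides the difference of two bands by their sum, while a generalized index replaces the two bands with weighted combinations over a band window.

# Sketch of generalized difference indices: NDVI is the familiar special case, and
# a GDFI-style measure replaces the two single bands with weighted band combinations.
# The weights below mimic a short difference filter; band indices are illustrative.
import numpy as np

rng = np.random.default_rng(4)
rows, cols, bands = 8, 8, 100
cube = rng.random((rows, cols, bands)) + 1e-6          # stand-in reflectance cube

red_band, nir_band = 30, 60                            # hypothetical band positions
red, nir = cube[..., red_band], cube[..., nir_band]
ndvi = (nir - red) / (nir + red)

def gdfi(cube, plus_weights, minus_weights, start):
    """Generalized (weighted-difference)/(weighted-sum) index over a band window."""
    window = cube[..., start:start + len(plus_weights)]
    num = window @ (plus_weights - minus_weights)
    den = window @ (plus_weights + minus_weights)
    return num / den

# A Haar-like pair of weights reproduces an NDVI-style ratio over adjacent bands.
plus = np.array([0.0, 1.0])
minus = np.array([1.0, 0.0])
index_band = gdfi(cube, plus, minus, start=30)
print(ndvi.shape, index_band.shape)                    # both (8, 8) derived bands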
Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII | 2007
Alexey Castrodad; Edward H. Bosch; Ronald G. Resmini
Several studies have reported that the use of derived spectral features, in addition to the original hyperspectral data, can facilitate the separation of similar classes. Linear and nonlinear transformations are employed to project data into mathematical spaces with the expectation that the decision surfaces separating similar classes become well defined. Therefore, the problem of discerning similar classes in expanded space becomes more tractable. Recent work presented by one of the authors discusses a dimension expansion technique based on generating real and imaginary complex features from the original hyperspectral signatures. A complex spectral angle mapper was employed to classify the data. In this paper, we extend this method to include other approaches that generate derivative-like and wavelet-based spectral features from the original data. These methods were tested with several supervised classifiers on two hyperspectral image (HSI) cubes.
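A hedged sketch of dimension expansion before spectral-angle classification (the first-difference feature, reference library, and data are illustrative choices, not the complex or wavelet-based expansions of the paper): each signature is augmented with a derivative-like feature and the spectral angle mapper is applied in the expanded space.

# Sketch of dimension expansion before spectral-angle classification: each signature
# is augmented with its first difference (a derivative-like feature), and the
# spectral angle mapper is applied in the expanded space.
import numpy as np

def expand(spectrum):
    """Append a first-difference (derivative-like) feature to the spectrum."""
    return np.concatenate([spectrum, np.diff(spectrum)])

def spectral_angle(a, b):
    """Angle (radians) between two feature vectors."""
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

rng = np.random.default_rng(5)
library = {"clay": rng.random(50), "sand": rng.random(50)}   # hypothetical references
pixel = library["sand"] + 0.02 * rng.standard_normal(50)

angles = {name: spectral_angle(expand(pixel), expand(ref))
          for name, ref in library.items()}
print(min(angles, key=angles.get))                           # expected: "sand"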
Proceedings of SPIE | 2013
Edward H. Bosch; Alexey Castrodad; John S. Cooper; Wojtek Czaja; Julia Dobrosotskaya
We propose a framework for analyzing and visualizing data at multiple scales and directions by constructing a novel class of tight frames. We describe an elegant way of creating 2D tight frames from 1D sets of orthonormal vectors and show how to exploit the representation redundancy in a computationally efficient manner. Finally, we employ this framework to perform image superresolution via edge detection and characterization.
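One simple route to the construction described above can be sketched as follows (a single orthonormal DCT basis is used for brevity; the paper's frames combine sets of 1D vectors to add redundancy and directionality): 2D atoms are formed as outer products of 1D orthonormal vectors, and the resulting separable system preserves energy and reconstructs the image exactly.

# Sketch of turning a 1D orthonormal set into 2D analysis atoms via outer products
# and checking the Parseval (tightness) property. Two different 1D bases could be
# stacked to add redundancy; one orthonormal DCT basis is used here for brevity.
import numpy as np
from scipy.fft import dct

n = 8
B = dct(np.eye(n), norm="ortho", axis=0)               # columns: 1D orthonormal vectors
atoms = [np.outer(B[:, i], B[:, j]) for i in range(n) for j in range(n)]

image = np.random.default_rng(6).standard_normal((n, n))
coeffs = np.array([np.sum(a * image) for a in atoms])  # analysis coefficients

# Energy preservation and exact synthesis for this orthonormal 2D system.
print(np.allclose(np.sum(coeffs**2), np.sum(image**2)))   # True
recon = sum(c * a for c, a in zip(coeffs, atoms))         # exact reconstruction
print(np.allclose(recon, image))                          # True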
Archive | 2013
Christopher J. Deloye; J. Christopher Flake; David S. Kittle; Edward H. Bosch; Robert S. Rand; David J. Brady
Coded aperture snapshot spectral imager (CASSI) systems are a class of imaging spectrometers that provide a first-generation implementation of compressive sensing themes to the domain of hyperspectral imaging. Via multiplexing of information from different spectral bands originating from different spatial locations, a CASSI system undersamples the three-dimensional spatial/spectral data cube of a scene. Reconstruction methods are then used to recover an estimate of the full data cube. Here we report on our characterization of a CASSI system’s performance in terms of post-reconstruction image quality and the suitability of using the resulting data cubes for typical hyperspectral data exploitation tasks (e.g., material detection, pixel classification). The data acquisition and reconstruction process does indeed introduce trade-offs in terms of achieved image quality and the introduction of spurious spectral correlations versus data acquisition speedup and the potential for reduced data volume. The reconstructed data cubes are of sufficient quality to perform reasonably accurate pixel classification. Potential avenues to improve upon the usefulness of CASSI systems for hyperspectral data acquisition and exploitation are suggested.
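A toy sketch of the multiplexing idea (the cube size, binary mask, unit-pixel shear, and omission of any reconstruction step are simplifications for illustration, not the characterized instrument): each band of a small cube is masked by a coded aperture, sheared by a band-dependent shift, and summed onto a single 2D detector frame.

# Toy sketch of CASSI-style multiplexing: each band of a small cube is masked by a
# random coded aperture, shifted by a band-dependent amount (spectral shear), and
# summed onto one 2D detector frame. No reconstruction step is shown.
import numpy as np

rng = np.random.default_rng(7)
rows, cols, bands = 32, 32, 8
cube = rng.random((rows, cols, bands))                 # stand-in scene cube
mask = rng.integers(0, 2, size=(rows, cols))           # binary coded aperture

detector = np.zeros((rows, cols + bands - 1))          # widened for the spectral shear
for k in range(bands):
    coded = mask * cube[..., k]                        # aperture coding of band k
    detector[:, k:k + cols] += coded                   # band-dependent shift, then sum

print(detector.shape)                                  # (32, 39): one 2D snapshot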
Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery X | 2004
Robert S. Rand; Edward H. Bosch
The effect of using Adaptive Wavelets is investigated for dimension reduction and noise filtering of hyperspectral imagery that is to be subsequently exploited for classification or subpixel analysis. The method is investigated as a possible alternative to the Minimum Noise Fraction (MNF) transform as a preprocessing tool. Unlike the MNF method, the wavelet-based method does not require an estimate of the noise covariance matrix that can often be difficult to obtain for complex scenes (such as urban scenes). Another desirable characteristic of the proposed wavelet transformed data is that, unlike Principal Component Analysis (PCA) transformed data, it maintains the same spectral shapes as the original data (the spectra are simply smoothed). In the experiment, an adaptive wavelet image cube is generated using four orthogonal conditions and three vanishing moment conditions. The classification performance of a Derivative Distance Squared (DDS) classifier and a Multilayer Feedforward Network (MLFN) neural network classifier applied to the wavelet cubes is then observed. The performance of the Constrained Energy Minimization (CEM) matched-filter algorithm applied to this data is also observed. HYDICE 210-band imagery containing a moderate amount of noise is used for the analysis so that the noise-filtering properties of the transform can be emphasized. Trials are conducted on a challenging scene with significant locally varying statistics that contains a diverse range of terrain features. The proposed wavelet approach can be automated to require no input from the user.
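The CEM matched filter mentioned above follows the standard form w = R^{-1} d / (d^T R^{-1} d), where R is the sample correlation matrix of the data and d the target signature. A hedged sketch on simulated pixels (the data, target, and planted-pixel setup are illustrative, and the wavelet preprocessing of the paper is omitted):

# Sketch of the Constrained Energy Minimization (CEM) matched filter:
# w = R^{-1} d / (d^T R^{-1} d), applied to simulated spectra.
import numpy as np

rng = np.random.default_rng(8)
n_pixels, n_bands = 500, 60
data = rng.random((n_pixels, n_bands))                 # stand-in background pixels
target = rng.random(n_bands)                           # stand-in target signature

# Plant a few target-like pixels among the background.
data[:5] = target + 0.05 * rng.standard_normal((5, n_bands))

R = data.T @ data / n_pixels                           # sample correlation matrix
R_inv_d = np.linalg.solve(R, target)
w = R_inv_d / (target @ R_inv_d)                       # CEM filter weights (w @ target = 1)

scores = data @ w                                      # close to 1 for planted pixels,
print(scores[:5].round(2))                             # suppressed on average elsewhere
print(np.abs(scores[5:]).mean().round(2))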
Proceedings of SPIE | 1998
Henry Berger; Edward H. Bosch; Edmundo Simental
This paper discusses, as an error source in quantitative imagery, the nonuniform illumination of individual pixels in an array that is intrinsic to the viewed scene, as opposed to turbulence or platform motion. It describes two classes of algorithms to treat this type of problem. It points out that this problem can be viewed as a type of inverse problem with a corresponding integral equation unlike those commonly treated in the literature. One class allows estimation of the spatial variation of radiance within pixels using the single digital number irradiances produced by the measurements of the detectors within their instantaneous fields of view (IFOVs). Usually it is assumed without discussion that the intrapixel radiance distribution is constant. Results are presented showing the improvements obtained by the methods discussed.
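A toy sketch of the kind of inverse problem described (the averaging operator, smoothness penalty, and regularization weight are illustrative assumptions, not the paper's algorithms): a fine radiance profile is observed only through overlapping IFOV integrations, and a regularized least-squares estimate recovers its subpixel variation.

# Toy sketch of the intrapixel inverse problem: a fine radiance profile is observed
# through overlapping IFOV integrations (rows of A), and a Tikhonov-regularized
# least-squares estimate recovers its subpixel variation. A and the regularization
# weight are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(9)
fine, coarse, span = 100, 20, 10                        # fine samples, detectors, IFOV width

# Each detector averages a 'span'-wide window of the fine profile (overlapping IFOVs).
A = np.zeros((coarse, fine))
starts = np.linspace(0, fine - span, coarse).astype(int)
for i, s in enumerate(starts):
    A[i, s:s + span] = 1.0 / span

truth = np.sin(np.linspace(0, 3 * np.pi, fine)) + 1.5   # smooth subpixel radiance
y = A @ truth + 0.01 * rng.standard_normal(coarse)      # digital-number measurements

# Tikhonov regularization with a first-difference penalty favoring smooth estimates.
D = np.diff(np.eye(fine), axis=0)
lam = 0.1
estimate = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
print(np.round(np.corrcoef(estimate, truth)[0, 1], 2))  # correlation with the truth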