Shawn Hunt
University of Puerto Rico at Mayagüez
Publications
Featured research published by Shawn Hunt.
Remote Sensing | 2006
James A. Goodman; Miguel Velez-Reyes; Shawn Hunt; Roy A. Armstrong
Remote sensing is increasingly being used as a tool to quantitatively assess the location, distribution and relative health of coral reefs and other shallow aquatic ecosystems. As the use of this technology continues to grow and the analysis products become more sophisticated, there is an increasing need for comprehensive ground truth data as a means to assess the algorithms being developed. The University of Puerto Rico at Mayagüez (UPRM), one of the core partners in the NSF-sponsored Center for Subsurface Sensing and Imaging Systems (CenSSIS), is addressing this need through the development of a fully characterized field test environment on Enrique Reef in southwestern Puerto Rico. This reef area contains a mixture of benthic habitats, including areas of seagrass, sand, algae and coral, and a range of water depths, from a shallow reef flat to a steeply sloping forereef. The objective behind the test environment is to collect multiple levels of image, field and laboratory data with which to validate physical models, inversion algorithms, feature extraction tools and classification methods for subsurface aquatic sensing. Data collected from Enrique Reef currently include airborne, satellite and field-level hyperspectral and multispectral images, in situ spectral signatures, water bio-optical properties and information on habitat composition and benthic cover. We present a summary of the latest results from Enrique Reef, discuss our concept of an open testbed for the remote sensing community and invite other users to utilize the data and participate in ongoing system development.
Proceedings of SPIE | 2001
Shawn Hunt; Miguel Velez-Reyes
Lossless compression algorithms typically do not use spectral prediction, and those that do typically use only a single adjacent band. Using one adjacent band has the disadvantage that if the last band compressed is needed, all previous bands must be decompressed. One way to avoid this is to use a few selected bands to predict the others. Exhaustive searches for band selection face a combinatorial explosion and are therefore not feasible except in the simplest cases. To counter this, the use of a fast approximate method for band selection is proposed. The bands selected by this algorithm are a reasonable approximation to the principal components. Results are presented for exhaustive studies using entropy measures and sums of squared errors, and are compared to the fast algorithm for simple cases. It was also found that using six bands selected by the fast algorithm produces performance comparable to using one adjacent band.
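The abstract does not spell out the fast selection procedure, but a greedy scheme that deflates the image by each chosen band is one common way to approximate the principal components. Below is a minimal Python sketch, assuming NumPy and a (rows, cols, bands) cube; the function names and the deflation strategy are illustrative, not taken from the paper:

```python
import numpy as np

def greedy_band_selection(cube, k=6):
    """Greedily pick k bands whose span best explains the remaining bands.

    A fast surrogate for principal components: repeatedly select the band
    with the largest residual energy, then remove the component of every
    other band that it explains.
    """
    X = cube.reshape(-1, cube.shape[-1]).astype(np.float64)  # pixels x bands
    residual = X - X.mean(axis=0)                            # centered data
    selected = []
    for _ in range(k):
        b = int(np.argmax(np.sum(residual ** 2, axis=0)))    # most energetic band
        selected.append(b)
        v = residual[:, b:b + 1]
        coeffs = (residual.T @ v) / (v.T @ v)                # least-squares fit per band
        residual -= v @ coeffs.T                             # deflate
    return sorted(selected)

def prediction_residuals(cube, selected):
    """Predict all bands from the selected ones; the rounded residuals
    are what an entropy coder would then compress losslessly."""
    X = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
    S = X[:, selected]
    coeffs, *_ = np.linalg.lstsq(S, X, rcond=None)
    return np.rint(X - S @ coeffs).astype(np.int32)
```

With six selected bands, decompressing any single band requires only those six plus the band's residuals, avoiding the chain of decompressions that one-adjacent-band prediction imposes.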
Proceedings of SPIE | 2013
John Lunzer; Shawn Hunt
NASA’s EO-1 satellite, well into its second decade of operation, continues to provide multispectral and hyperspectral data to the remote sensing community. The Hyperion pushbroom hyperspectral spectrometer aboard EO-1 can be a rich and useful source of high temporal resolution hyperspectral data. Unfortunately, the Hyperion sensor suffers from several issues, including a low signal-to-noise ratio in many band regions as well as imaging artifacts. One artifact is the presence of vertical striping, which, if uncorrected, limits the value of the Hyperion imagery. The detector array reads all spectral bands one spatial dimension (cross-track) at a time; the second spatial dimension (in-track) arises from the motion of the satellite. The striping is caused by calibration errors in the detector array that appear as a vertical striping pattern in the in-track direction. Because of the layout of the sensor array, each spectral band exhibits its own characteristic striping pattern, and each must be corrected independently. Many current Hyperion destriping algorithms correct stripes by analyzing the column means and standard deviations of each band; the more effective algorithms use windowing of the column means and interband correlation of these windowed means. The approach taken in this paper achieves greater accuracy and effectiveness by applying local windowing not only in the cross-track dimension but also along the in-track dimension. This allows detection of the striping patterns in radiometrically homogeneous areas, providing improved detection accuracy.
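As a rough illustration of column-statistics destriping, the Python sketch below flags a column as a stripe when its mean is an outlier among its cross-track neighbors and then matches its gain and offset to them. The window size, threshold, and moment-matching correction are assumptions, not the paper's algorithm, which also windows along the in-track dimension:

```python
import numpy as np

def destripe_band(band, win=31, thresh=3.0):
    """Correct vertical stripes in one band (rows = in-track, cols = cross-track).

    A column is flagged when its mean deviates strongly from the means of
    nearby columns; flagged columns are gain/offset-adjusted (moment
    matching) to agree with their neighbors.
    """
    corrected = band.astype(np.float64).copy()
    half = win // 2
    for c in range(corrected.shape[1]):
        lo, hi = max(0, c - half), min(corrected.shape[1], c + half + 1)
        neighbors = [j for j in range(lo, hi) if j != c]
        col_means = corrected[:, neighbors].mean(axis=0)  # mean of each neighbor column
        m_col = corrected[:, c].mean()
        if col_means.std() > 0 and abs(m_col - col_means.mean()) > thresh * col_means.std():
            s_col = corrected[:, c].std()
            s_ref = corrected[:, neighbors].std()
            gain = s_ref / s_col if s_col > 0 else 1.0    # match standard deviation
            corrected[:, c] = (corrected[:, c] - m_col) * gain + col_means.mean()
    return corrected
```

Each band would be processed independently with its own statistics, since each exhibits its own striping pattern.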
Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII | 2007
Samuel Rosario-Torres; Miguel Velez-Reyes; Shawn Hunt; Luis O. Jimenez
The Hyperspectral Image Analysis Toolbox (HIAT) is a collection of algorithms that extend the capability of the MATLAB numerical computing environment for the processing of hyperspectral and multispectral imagery. The purpose of the Toolbox is to provide a suite of information extraction algorithms to users of hyperspectral and multispectral imagery. HIAT has been developed as part of the NSF Center for Subsurface Sensing and Imaging Systems (CenSSIS) Solutionware, which seeks to develop a repository of reliable and reusable software tools that can be shared by researchers across research domains. HIAT provides easy access to feature extraction/selection, supervised and unsupervised classification algorithms, unmixing and visualization developed at the Laboratory of Remote Sensing and Image Processing (LARSIP). This paper presents an overview of the tools and applications available in HIAT, using an AVIRIS image as an example. In addition, we present the new HIAT developments: unmixing, a new oversampling algorithm, true-color visualization, a crop tool and GUI enhancements.
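HIAT itself is MATLAB code, so the following is only a loose analogy: a minimal Python sketch of the kind of processing chain the toolbox packages (feature extraction followed by supervised classification), with scikit-learn standing in for HIAT's own algorithms:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

def classify_cube(cube, train_spectra, train_labels, n_features=10):
    """Toy HIAT-style chain: PCA feature extraction, then a Gaussian
    classifier. cube: (rows, cols, bands); train_spectra: (n, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    pca = PCA(n_components=n_features).fit(X)                  # feature extraction
    clf = GaussianNB().fit(pca.transform(train_spectra), train_labels)
    return clf.predict(pca.transform(X)).reshape(rows, cols)  # per-pixel class map
```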
Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX | 2003
Shawn Hunt; Heidy Sierra
This paper investigates whether and how oversampling techniques can be usefully applied to hyperspectral images. Oversampling occurs when a signal is sampled at a rate higher than its Nyquist rate. When this is the case, the higher sampling rate can be traded for precision; specifically, one bit of precision can be gained if the signal has been oversampled by a factor of four. This paper first investigates whether spectral oversampling actually occurs in hyperspectral images, then looks at its usefulness in classification. Simulations were done with synthetic and real images. The results indicate that oversampling does occur for many real objects, so knowledge of what is being searched for is crucial in determining whether oversampling techniques can be used. The classification results indicate that, with synthetic images, it takes a relatively large amount of noise for these techniques to have a significant impact on classification. For real images, however, an improvement in classification for both supervised and unsupervised algorithms was observed in all simulations.
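The rate-for-precision trade is easy to check numerically. In the Python sketch below (a toy, not the paper's experiment), a slowly varying "spectrum" sampled well above its Nyquist rate is averaged in groups of four, which halves the noise standard deviation, a 6 dB or one-bit gain:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 512)              # slow variation: heavily oversampled
noisy = signal + rng.normal(scale=0.1, size=n)    # additive noise

# Average groups of 4 samples: 4x oversampling traded for one bit of precision.
avg = noisy.reshape(-1, 4).mean(axis=1)
ref = signal.reshape(-1, 4).mean(axis=1)
print(np.std(noisy - signal))   # about 0.1
print(np.std(avg - ref))        # about 0.05: noise halved (one extra bit)
```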
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Shawn Hunt; Gary Witus; Darin Ellis
This paper describes progress toward a robotic arm system to collect swab samples for trace chemical analysis. Collecting a swab sample requires bringing the swab in contact with the target object and maintaining adequate pressure against the object while dragging the swab tip across the surface, conforming to the surface's compliance, curvature and irregularities. It also requires detecting when the swipe motion is blocked, when it has reached the end or edge of the object, and when the normal swipe excursion has been completed. Remote or robotic swab sample collection is complicated by the fact that key physical properties of the target object, e.g., its location, surface contour, compliance and friction, are unknown or, at best, uncertain. We are pursuing a two-fold approach to computer-assisted robotic swab sampling. We are developing a force-feedback master-slave puppet arm control system in which the operator manipulates, and receives force feedback through, a scale model of the remote arm. We are also developing adaptive motion behaviors for autonomous swab sample collection, in which the arm feels its way over the surface, adjusting its configuration to conform to the surface contour while maintaining pressure and keeping the swab tip in the desired orientation with respect to the surface as it drags the swab across the target object. Experiments with the master-slave system will provide data on human operator adaptive motion behaviors and a baseline for evaluation of the automatic system. This paper describes the force-feedback master-slave puppet arm control system, presents example teleoperated swab dynamics data, describes the emerging framework for analysis of adaptive motion behaviors in swab sample collection, and describes our approach to autonomous swab sampling adaptive behavior and control.
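The adaptive swipe behavior can be caricatured as a hybrid force/position loop. In the Python sketch below, read_force and move are hypothetical robot-interface callbacks (nothing like them is named in the paper); the loop servos the surface-normal axis to hold a target contact force while stepping tangentially, and stops on a force spike (blocked) or loss of contact (end or edge of the object):

```python
def swab_stroke(read_force, move, target_force=2.0, kp=0.002,
                step=0.001, max_steps=500, stall_force=8.0):
    """Toy swipe loop: drag the swab while servoing contact pressure.

    read_force() -> measured normal force (N); move(d_tangent, d_normal)
    commands a small Cartesian step (m). Both are hypothetical callbacks.
    """
    for _ in range(max_steps):
        f = read_force()
        if f > stall_force:
            return "blocked"              # swipe motion is obstructed
        if f < 0.1 * target_force:
            return "edge"                 # swab ran off the object
        dz = kp * (target_force - f)      # proportional force control, normal axis
        move(step, dz)                    # advance tangentially, adjust pressure
    return "done"                         # normal swipe excursion completed
```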
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Gary Witus; Shawn Hunt
The vision system of a mobile robot for checkpoint and perimeter security inspection performs multiple functions: providing surveillance video, providing high resolution still images, and providing video for semi-autonomous visual navigation. Mid-priced commercial digital cameras support the primary inspection functions. Semi-autonomous visual navigation is a tertiary function whose purpose is to reduce the burden of teleoperation and free the security personnel for their primary functions. Approaches to robot visual navigation require some form of depth perception for speed control, to prevent the robot from colliding with objects. In this paper we present the initial results of an exploration of the capabilities and limitations of using a single monocular commercial digital camera for depth perception. Our approach combines complementary methods in alternating stationary and moving behaviors. When the platform is stationary, it computes a range image from differential blur in an image stack collected at multiple focus settings. When the robot is moving, it extracts an estimate of range from the camera's auto-focus function and combines this with an estimate derived from the angular expansion of a constellation of visual tracking points.
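For the stationary behavior, a simplified stand-in for ranging from differential blur is depth-from-focus over the image stack: score each pixel's sharpness at every focus setting and keep the distance of the sharpest one. A minimal Python sketch, assuming NumPy/SciPy and a pre-aligned grayscale stack (the sharpness measure and window size are assumptions):

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(stack, focus_distances):
    """stack: (n_focus, rows, cols) images taken at different focus
    settings; focus_distances: the distance each setting focuses at.
    Returns a per-pixel depth estimate."""
    sharpness = np.stack([
        ndimage.uniform_filter(ndimage.laplace(img.astype(np.float64)) ** 2, size=9)
        for img in stack                       # local Laplacian energy = sharpness
    ])
    best = np.argmax(sharpness, axis=0)        # sharpest focus setting per pixel
    return np.asarray(focus_distances)[best]   # map setting index to distance
```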
Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery | 2005
Shirley Morillo-Contreras; Miguel Velez-Reyes; Shawn Hunt
A particular challenge in hyperspectral remote sensing of benthic habitats is that the signal exiting the water is a small component of the overall signal received at the satellite or airborne sensor. Therefore, in order to discriminate different ecological areas in benthic habitats, it is important to have a high signal-to-noise ratio (SNR). The SNR can be improved by building better sensors; we believe, however, that SNR improvements are also achievable by means of signal processing and by taking advantage of the unique characteristics of hyperspectral sensors. One approach for SNR improvement is based on signal oversampling. Another is Reduced Rank Filtering (RRF), in which the small singular values of the image are discarded and a lower-rank approximation to the original image is reconstructed. This paper presents a comparison of oversampling filtering (OF) versus RRF as SNR enhancement methods, in terms of classification accuracy and class separability, when used as a pre-processing step in a classification system. Overall results show that OF improves classification accuracy more than RRF, and at much lower computational cost, making it an attractive technique for hyperspectral image processing.
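The RRF step described above maps directly onto a truncated singular value decomposition of the pixels-by-bands matrix. A minimal Python sketch (the rank is left as a parameter; choosing it is the substantive decision):

```python
import numpy as np

def reduced_rank_filter(cube, rank):
    """Discard the small singular values of a (rows, cols, bands) cube
    and reconstruct a lower-rank approximation, suppressing noise that
    lives in the low-energy subspace."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)    # pixels x bands
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0                                    # drop small singular values
    return ((U * s) @ Vt).reshape(rows, cols, bands)  # rank-limited reconstruction
```

The SVD of the full pixel matrix is also what makes RRF expensive relative to a per-pixel spectral filter, consistent with the computational-cost comparison above.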
Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery X | 2004
Shawn Hunt; Jaime Laracuente
The spectrum of most objects in a hyperspectral image is oversampled in the spectral dimension because such images have many closely spaced spectral samples. This oversampling implies that there is redundant information in the image which can be exploited to reduce noise and so increase the correct classification percentage. Oversampling techniques have been shown to be useful in the classification of hyperspectral imagery. Previous techniques consist of a lowpass filter in the spectral dimension whose characteristics are chosen based on the average spectral density of many objects to be classified. A better way of selecting the characteristics of the filter is to calculate the spectral density and oversampling of each object, and use these to determine the filter. The algorithm proposed here exploits the fact that the system is supervised, using the training samples to determine the oversampling rate. The oversampling rate is used to determine the cutoff frequency for each class, and the highest of these is used to filter the whole image. Two-pass approaches, in which each class in the image is filtered with its own filter, were studied, but the increase in performance did not justify the increase in computational load. Results of applying these techniques, using AVIRIS imagery, show a significant improvement in classification performance.
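A minimal sketch of the supervised cutoff estimate, assuming NumPy/SciPy; the 99% energy criterion and the Butterworth filter are illustrative choices, not necessarily the paper's:

```python
import numpy as np
from scipy import signal

def class_cutoff(train_spectra, energy_frac=0.99):
    """Normalized cutoff frequency for one class: the frequency below
    which energy_frac of the class's average spectral power lies.
    train_spectra: (n_samples, bands)."""
    freqs, psd = signal.periodogram(train_spectra, axis=-1)
    cum = np.cumsum(psd.mean(axis=0))
    return freqs[np.searchsorted(cum / cum[-1], energy_frac)]

def filter_cube(cube, cutoffs, order=4):
    """One-pass scheme: lowpass every pixel's spectrum at the highest
    per-class cutoff (periodogram frequencies lie in [0, 0.5])."""
    wn = min(max(cutoffs) / 0.5, 0.99)           # rescale to butter's [0, 1] convention
    b, a = signal.butter(order, wn)
    return signal.filtfilt(b, a, cube, axis=-1)  # filter along the spectral dimension
```

A two-pass variant would call filter_cube once per class with that class's own cutoff, which is the higher-cost option the paper found unjustified.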
Proceedings of SPIE | 2014
Nicole M. Rodríguez-Carrión; Shawn Hunt; Miguel A. Goenaga-Jimenez; Miguel Vélez-Reyes
This work describes a novel method of estimating statistically optimum pixel sizes for classification. Historically, more resolution (smaller pixel sizes) has been considered better, but smaller pixels can cause difficulties in classification: if the pixel size is too small, the variation among pixels belonging to the same class can be very large. This work studies the variance of the pixels for different pixel sizes to try to answer the question of how small (or how large) the pixel size can be while still yielding good algorithm performance. Optimum pixel size is defined here as the size at which pixels from the same class statistically come from the same distribution. The work first derives ideal results, then compares these to real data. The real hyperspectral data come from a SOC-700 stand-mounted hyperspectral camera. The results compare the theoretical derivations to variances calculated with real data in order to estimate different optimal pixel sizes, and show a good correlation between real and ideal data.
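One way to probe the criterion empirically is to simulate coarser pixels by block-averaging and watch the within-class variance. A minimal Python sketch (the block-averaging simulation and the majority-mask rule are assumptions, not the paper's procedure):

```python
import numpy as np

def variance_vs_pixel_size(band, class_mask, sizes=(1, 2, 4, 8, 16)):
    """Within-class variance as a function of simulated pixel size.

    band: 2-D image of one spectral band; class_mask: boolean mask of
    one class. Each size s simulates an s-times-coarser sensor by
    block-averaging; a flattening variance curve suggests the pixels
    now behave as draws from a single distribution."""
    out = {}
    for s in sizes:
        r = (band.shape[0] // s) * s
        c = (band.shape[1] // s) * s
        blocks = band[:r, :c].reshape(r // s, s, c // s, s).mean(axis=(1, 3))
        frac = class_mask[:r, :c].reshape(r // s, s, c // s, s).mean(axis=(1, 3))
        out[s] = float(np.var(blocks[frac > 0.5]))  # keep majority-class blocks
    return out
```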