Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Grady Tuell is active.

Publication


Featured research published by Grady Tuell.


International Geoscience and Remote Sensing Symposium | 2008

Seafloor and Land Cover Classification Through Airborne Lidar and Hyperspectral Data Fusion

Christopher Macon; Jennifer M. Wozencraft; Joong Yong Park; Grady Tuell

In 2007 the Joint Airborne Lidar Bathymetry Technical Center of Expertise (JALBTCX) collected concurrent bathymetric lidar and hyperspectral imagery in Hilo Bay, Hawaii. The data were collected using the Compact Hydrographic Airborne Rapid Total Survey (CHARTS) system. CHARTS is JALBTCX's in-house survey capability and includes a SHOALS-3000 lidar instrument integrated with a CASI-1500 hyperspectral imager. CHARTS collects either 20-kHz topographic lidar data or 3-kHz bathymetric lidar data, each concurrent with digital RGB and hyperspectral imagery. Optech International's Rapid Environmental Assessment (REA) Processor is designed to integrate the bathymetric lidar and hyperspectral data streams, creating a product suite that includes maps of water depth, bottom reflectance, water column volume reflectance, a+bb (a measure of water column attenuation) derived from the bathymetric lidar data, spectral color-balanced mosaics of seafloor reflectance, spectral water column parameters, and seafloor and land cover classifications. This paper demonstrates the capability of Optech REA on the production dataset from Hilo Bay.
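The a+bb attenuation product mentioned above is derived from the lidar waveform itself. As a minimal sketch only, assuming the volume backscatter decays roughly exponentially with depth between the surface and bottom returns, an effective attenuation coefficient can be recovered from the log slope of that decay. The sample spacing, speed of light in water, and guard offsets below are illustrative assumptions, not Optech REA's actual algorithm.

import numpy as np

def estimate_attenuation(waveform, surface_bin, bottom_bin, dt_ns=1.0, c_water=2.25e8):
    # Illustrative sketch: effective water-column attenuation (akin to a+bb)
    # from the decay of volume backscatter between surface and bottom returns.
    segment = np.asarray(waveform[surface_bin + 3: bottom_bin - 3], dtype=float)
    # Depth below the surface return for each sample (two-way travel time).
    depths = 0.5 * c_water * np.arange(segment.size) * dt_ns * 1e-9
    valid = segment > 0
    # Volume backscatter decays roughly as exp(-2*k*z); recover k from the log slope.
    slope, _intercept = np.polyfit(depths[valid], np.log(segment[valid]), 1)
    return -0.5 * slope  # effective attenuation coefficient, 1/m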


Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery | 2005

SHOALS-enabled 3D benthic mapping

Grady Tuell; Joong Yong Park; Jennifer Aitken; Vinod Ramnath; Viktor Feygels; Gary Guenther; Yuri Kopilevich

For the past two decades, hydrographic surveyors have used Optech's bathymetric laser technology to accurately measure water depths and to describe the geometry of the shallow-water seafloor. Recently, we have demonstrated the potential to produce bottom images from estimates of SHOALS-1000T green laser reflectance, and spatial variations in the optical properties of the water column by analyzing time-resolved waveforms. We have also performed the electronic and geometric integration of an imaging spectrometer into SHOALS, and have developed a first generation of software which provides for the exploitation of the combined laser and hyperspectral data within a fusion paradigm. In this paper, we discuss relevant sensor and data fusion issues, and present recent 3D benthic mapping results.


Oceans Conference | 2014

Using lidar waveforms to detect environmental hazards through visualization of the water column

Joong Yong Park; Vinod Ramnath; Grady Tuell

Airborne bathymetric lidar (Light Detection and Ranging) systems measure the optical path (range and angle) between the sea surface and the seafloor. Lidar achieves this measurement by firing a laser from an aircraft and calculating the range from the speed of light and the time it takes for the laser pulse to return. The photoelectron count at the photocathode is sampled at a high rate, typically once every nanosecond, and the resulting time series for a single pulse is called a waveform. The laser pulse return (reflection) from the seafloor is visible in the waveform as a pronounced “bump” above the volume backscatter. Similarly, any floating or submerged objects in the water can also be seen in the waveform. Combined with a voxelized 3D image cube visualization of the returned backscatter from the sea surface to the seafloor, waveform analysis of airborne bathymetric lidar data can therefore detect changes at the water surface and in the underwater environment. Oil spills on the water surface and leaks from sewers or other underwater pipes can be detected by analyzing changes in the water column volume. The paper demonstrates the possibility of utilizing this unique three-dimensional water column visualization tool to detect environmental hazards and carry out rapid environmental assessments.
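To make the time-of-flight ranging concrete, a minimal sketch of extracting a depth from a single waveform follows. The peak-picking heuristic, sample spacing, and speed of light in water are illustrative assumptions, not the authors' production algorithm.

import numpy as np

C_WATER = 2.25e8  # approximate speed of light in water, m/s (assumed constant)

def seafloor_depth(waveform, dt_ns=1.0, surface_bin=None):
    # waveform    : 1-D array of photoelectron counts sampled once every dt_ns nanoseconds
    # surface_bin : index of the sea-surface return; defaults to the global maximum
    waveform = np.asarray(waveform, dtype=float)
    if surface_bin is None:
        surface_bin = int(np.argmax(waveform))  # the surface return is usually strongest
    # Look for the seafloor "bump" rising above the volume-backscatter background.
    tail = waveform[surface_bin + 5:]
    background = np.median(tail)
    bottom_bin = surface_bin + 5 + int(np.argmax(tail - background))
    # Two-way travel time between surface and bottom returns gives the depth.
    dt_s = (bottom_bin - surface_bin) * dt_ns * 1e-9
    return 0.5 * C_WATER * dt_s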


Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX | 2003

Fusion of hyperspectral and bathymetric laser data in Kaneohe Bay, Hawaii

Jennifer M. Wozencraft; Mark Lee; Grady Tuell; William D. Philpot

Passive, hyperspectral image data and bathymetric lidar data are complementary data types that can be used effectively in tandem. Hyperspectral data contain information related to water quality, depth, and bottom type; and bathymetric lidar data contain precise information about the depth of the water and qualitative information about water quality and bottom reflectance. The two systems together provide constraints on each other. For example, lidar-derived depths can be used to constrain spectral radiative transfer models for hyperspectral data, which allows for the estimation of bottom reflectance for each pixel. Similarly, depths can be used to calibrate models, which permit the estimation of depths from the hyperspectral data cube on the raster defined by the spectral imagery. We demonstrate these capabilities by fusing hyperspectral data from the LASH and AVIRIS spectrometers with depth data from the SHOALS bathymetric laser to achieve bottom classification and increase the density of depth measurements in Kaneohe Bay, Hawaii. These capabilities are envisioned as operating modes of the next-generation SHOALS system, CHARTS, which will deploy a bathymetric laser and spectrometer on the same platform.
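One way to picture the lidar-constrained radiative transfer step is a textbook single-scattering shallow-water model, r_rs = r_deep*(1 - exp(-2*Kd*H)) + (rho_b/pi)*exp(-2*Kd*H), in which the lidar depth H fixes the two-way attenuation term and the bottom reflectance rho_b is solved per pixel. The model form, symbols, and function below are a hedged sketch under these assumptions, not the specific model used in the paper.

import numpy as np

def bottom_reflectance(r_rs, depth_m, k_d, r_deep):
    # r_rs    : subsurface remote-sensing reflectance for a pixel (per band)
    # depth_m : lidar-derived water depth for that pixel
    # k_d     : assumed diffuse attenuation coefficient (per band), 1/m
    # r_deep  : assumed optically deep-water reflectance (per band)
    atten = np.exp(-2.0 * k_d * depth_m)       # two-way attenuation to the bottom
    water_column = r_deep * (1.0 - atten)      # water-column contribution
    return np.pi * (r_rs - water_column) / atten  # solve for bottom albedo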


Applied Optics | 2014

Estimating field-of-view loss in bathymetric lidar: application to large-scale simulations

Domenic Carr; Grady Tuell

When designing a bathymetric lidar, it is important to study simulated waveforms for various combinations of system and environmental parameters. To predict a system's ranging accuracy, it is often necessary to analyze thousands of waveforms. In these large-scale simulations, estimating field-of-view loss is a challenge because the calculation is complex and computationally intensive. This paper describes a new procedure for quickly approximating this loss, and illustrates how it can be used to efficiently predict ranging accuracy.
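As a rough illustration of the kind of approximation involved, one could model the laterally spread bottom return as a circular Gaussian and take the energy falling outside the receiver footprint as the field-of-view loss. The spreading model, parameters, and constants below are assumptions for illustration only, not the procedure developed in the paper.

import numpy as np

def fov_loss(depth_m, scatter_coeff, fov_half_angle_rad, altitude_m):
    # Receiver footprint radius projected onto the water surface.
    fov_radius = altitude_m * np.tan(fov_half_angle_rad)
    # Assumed lateral spreading of the bottom return, growing with optical depth.
    sigma = max(0.3 * depth_m * (1.0 - np.exp(-scatter_coeff * depth_m)), 1e-6)
    # Fraction of a circular 2-D Gaussian inside radius R is 1 - exp(-R^2 / (2*sigma^2)).
    captured = 1.0 - np.exp(-(fov_radius ** 2) / (2.0 * sigma ** 2))
    return 1.0 - captured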


Applied Optics | 2013

Correction for reflected sky radiance in low-altitude coastal hyperspectral images

Minsu Kim; Joong Yong Park; Yuri Kopilevich; Grady Tuell; William D. Philpot

Low-altitude coastal hyperspectral imagery is sensitive to reflections of sky radiance at the water surface. Even in the absence of sun glint, and for a calm water surface, the wide range of viewing angles may result in pronounced, low-frequency variations of the reflected sky radiance across the scan line depending on the solar position. The variation in reflected sky radiance can be obscured by strong high-spatial-frequency sun glint and at high altitude by path radiance. However, at low altitudes, the low-spatial-frequency sky radiance effect is frequently significant and is not removed effectively by the typical corrections for sun glint. The reflected sky radiance from the water surface observed by a low-altitude sensor can be modeled in the first approximation as the sum of multiple-scattered Rayleigh path radiance and the direct solar-beam radiance singly scattered by aerosol in the lower atmosphere. The path radiance from zenith to the half field of view (FOV) of a typical airborne spectroradiometer has relatively minimal variation, and its reflected radiance at the detector array results in a flat baseline. Therefore the along-track variation is mostly contributed by the forward single-scattered solar-beam radiance. The scattered solar-beam radiances arrive at the water surface with different incident angles. Thus the reflected radiance received at the detector array corresponds to a certain scattering angle, and its variation is most effectively parameterized using the downward scattering angle (DSA) of the solar beam. Computation of the DSA must account for the roll, pitch, and heading of the platform and the viewing geometry of the sensor along with the solar ephemeris. Once the DSA image is calculated, the near-infrared (NIR) radiance from selected water scan lines is compared, and a relationship between DSA and NIR radiance is derived. We then apply the relationship to the entire DSA image to create an NIR reference image. Using the NIR reference image and an atmospheric spectral reflectance look-up table, the low-spatial-frequency variation of the water surface-reflected atmospheric contribution is removed.
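A highly simplified sketch of the geometry and the empirical correction follows: the DSA is computed for a flat water surface (ignoring the platform roll, pitch, and heading the paper accounts for), and a linear fit stands in for the paper's DSA-to-NIR relationship and its look-up-table spectral scaling. The names and simplifications below are assumptions.

import numpy as np

def downward_scattering_angle(sun_zenith, sun_azimuth, view_zenith, view_azimuth):
    # Angles in radians; flat-surface approximation.
    def unit(zen, az):
        return np.array([np.sin(zen) * np.cos(az), np.sin(zen) * np.sin(az), np.cos(zen)])
    # Propagation direction of the direct solar beam (pointing downward).
    sun_beam = -unit(sun_zenith, sun_azimuth)
    # The downward sky ray that specularly reflects into the view direction shares
    # the view vector's horizontal components with the vertical component flipped.
    v = unit(view_zenith, view_azimuth)        # surface-to-sensor direction
    sky_ray = np.array([v[0], v[1], -v[2]])
    cos_dsa = np.clip(np.dot(sun_beam, sky_ray), -1.0, 1.0)
    return np.arccos(cos_dsa)

def nir_sky_reference(dsa_image, nir_image):
    # Fit a linear DSA-to-NIR relationship over water pixels and evaluate it
    # everywhere; a stand-in for the paper's empirical relationship.
    slope, intercept = np.polyfit(dsa_image.ravel(), nir_image.ravel(), 1)
    return slope * dsa_image + intercept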


Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXIV | 2018

Landing zone identification for autonomous UAV applications using fused hyperspectral imagery and LIDAR point clouds

Sarah E. Lane; Ryan James; Domenic Carr; Grady Tuell

Multi-modal data fusion for situational awareness is of interest because fusion of data can provide more information than the individual modalities alone. However, many questions remain, including: what data are beneficial, which algorithms work best or fastest, and where in the processing pipeline should data be fused? In this paper, we explore some of these questions through a processing pipeline designed for multi-modal data fusion in an autonomous UAV landing scenario, assessing landing zone identification methods that use two data modalities: hyperspectral imagery and LIDAR point clouds. Using hyperspectral image and LIDAR data from two datasets, one of Maui and one of a university campus, we assess the accuracies of different landing zone identification methods, compare rule-based and machine learning based classifications, and show that, depending on the dataset, fusion does not always increase performance. However, we show that machine learning methods can be used to ascertain the usefulness of individual modalities and their resulting attributes when used to perform classification.
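As an illustration of the rule-based versus learned comparison, the sketch below thresholds fused terrain and spectral attributes for a rule-based mask and trains a random forest on the stacked features. The attribute names, thresholds, and the choice of a random forest are assumptions for illustration; the paper does not specify them.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rule_based_landing_mask(slope_deg, roughness_m, ndvi,
                            max_slope=5.0, max_roughness=0.15, max_ndvi=0.3):
    # Per-pixel landing-zone mask from lidar-derived slope/roughness and an
    # HSI-derived vegetation index (all thresholds hypothetical).
    return (slope_deg < max_slope) & (roughness_m < max_roughness) & (ndvi < max_ndvi)

def train_fused_classifier(hsi_features, lidar_features, labels):
    # Feature-level fusion: stack per-pixel HSI and lidar attributes, train a
    # classifier, and use feature importances as a rough measure of each
    # modality's usefulness.
    X = np.hstack([hsi_features, lidar_features])   # shape (n_pixels, n_hsi + n_lidar)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf, clf.feature_importances_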


Proceedings of SPIE | 2016

Real-time, mixed-mode computing architecture for waveform-resolved lidar systems with total propagated uncertainty

Robert L. Ortman; Domenic Carr; Ryan James; Daniel Long; Matthew R. O'Shaughnessy; Christopher R. Valenta; Grady Tuell

We have developed a prototype real-time computer for a bathymetric lidar capable of producing point clouds attributed with total propagated uncertainty (TPU). This real-time computer employs a “mixed-mode” architecture comprising an FPGA, a CPU, and a GPU. Noise reduction and ranging are performed in the digitizer’s user-programmable FPGA, and coordinates and TPU are calculated on the GPU. A Keysight M9703A digitizer with user-programmable Xilinx Virtex 6 FPGAs digitizes as many as eight channels of lidar data, performs ranging, and delivers the data to the CPU via PCIe. The floating-point-intensive coordinate and TPU calculations are performed on an NVIDIA Tesla K20 GPU. Raw data and computed products are written to an SSD RAID, and an attributed point cloud is displayed to the user. This prototype computer has been tested using 7-m-deep waveforms measured at a water tank on the Georgia Tech campus, and with simulated waveforms to a depth of 20 m. Preliminary results show the system can compute, store, and display about 20 million points per second.
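To illustrate what attributing a point cloud with TPU means in practice, here is a toy 2-D geolocation with first-order covariance propagation through the measurement Jacobian. The real system's geolocation model (full attitude, water refraction, boresight calibration) is far richer; the function and parameters below are illustrative assumptions.

import numpy as np

def point_with_tpu(range_m, azimuth_rad, sensor_xy,
                   sigma_range, sigma_azimuth, sigma_position):
    # Point = sensor position offset by the measured range at the scan azimuth.
    dx, dy = range_m * np.sin(azimuth_rad), range_m * np.cos(azimuth_rad)
    point = np.asarray(sensor_xy) + np.array([dx, dy])
    # Jacobian of (x, y) with respect to (range, azimuth).
    J = np.array([[np.sin(azimuth_rad),  range_m * np.cos(azimuth_rad)],
                  [np.cos(azimuth_rad), -range_m * np.sin(azimuth_rad)]])
    C_meas = np.diag([sigma_range ** 2, sigma_azimuth ** 2])
    C_point = J @ C_meas @ J.T + np.eye(2) * sigma_position ** 2
    return point, np.sqrt(np.diag(C_point))   # 1-sigma horizontal TPU per axis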


Proceedings of SPIE | 2015

STAC: a comprehensive sensor fusion model for scene characterization

Alan R. Wagner; Chris Kennedy; Jason Zutty; Grady Tuell

We are interested in data fusion strategies for Intelligence, Surveillance, and Reconnaissance (ISR) missions. Advances in theory, algorithms, and computational power have made it possible to extract rich semantic information from a wide variety of sensors, but these advances have raised new challenges in fusing the data. For example, in developing fusion algorithms for moving target identification (MTI) applications, what is the best way to combine image data having different temporal frequencies, and how should we introduce contextual information acquired from monitoring cell phones or from human intelligence? In addressing these questions we have found that existing data fusion models do not readily facilitate comparison of fusion algorithms performing such complex information extraction, so we developed a new model that does. Here, we present the Spatial, Temporal, Algorithm, and Cognition (STAC) model. STAC allows for describing the progression of multi-sensor raw data through increasing levels of abstraction, and provides a way to easily compare fusion strategies. It provides for unambiguous description of how multi-sensor data are combined, the computational algorithms being used, and how scene understanding is ultimately achieved. In this paper, we describe and illustrate the STAC model, and compare it to other existing models.


International Geoscience and Remote Sensing Symposium | 2014

Qualification testing of the Agilent M9703A digitizer for use in a bathymetric LiDAR

Grady Tuell; Ryan James; Robert L. Ortman; Christopher R. Valenta; Domenic Carr; Jason Zutty

Waveform-resolved bathymetric lidar applications demand high-performance analog-to-digital converters (ADCs) to handle the high-bandwidth, large-dynamic-range signals typical of coastal mapping applications. In this paper, the Agilent M9703A digitizer is evaluated against these criteria and found to be an excellent candidate. Furthermore, the customizable FPGA firmware onboard the M9703A has been used along with a GPU to demonstrate real-time computation of a lidar coordinate point cloud, reducing the computation time required by current state-of-the-art systems from hours to seconds.

Collaboration


Dive into Grady Tuell's collaborations.

Top Co-Authors

Domenic Carr, Georgia Tech Research Institute
Ryan James, Georgia Tech Research Institute
Christopher R. Valenta, Georgia Institute of Technology
Jason Zutty, Georgia Tech Research Institute
Jennifer M. Wozencraft, United States Army Corps of Engineers
Robert L. Ortman, Georgia Tech Research Institute
Alan R. Wagner, Georgia Tech Research Institute
Chris Kennedy, Georgia Tech Research Institute
Daniel Long, Georgia Tech Research Institute