
Publication


Featured research published by Daniel A. Cook.


IEEE Radar Conference | 2012

Gigapixel spotlight synthetic aperture radar backprojection using clusters of GPUs and CUDA

Thomas M. Benson; Daniel P. Campbell; Daniel A. Cook

Synthetic aperture radar (SAR) image formation via backprojection offers a robust mechanism by which to form images on general, non-planar surfaces, without often restrictive assumptions regarding the planarity of the wavefront at the locations being imaged. However, backprojection presents a substantially increased computational load relative to other image formation algorithms that typically depend upon fast Fourier transforms. In this paper, we present an image formation framework for accelerated SAR backprojection that utilizes a cluster of computing nodes, each with one or more graphics processing units (GPUs). We address the parallelization of the backprojection process among multiple nodes and the scalability thereby obtained, several optimization approaches, and performance as a function of both allocated resources and desired precision. Finally, we demonstrate the achieved performance on a simulated gigapixel-scale data set.
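The backprojection loop the paper accelerates can be sketched in a few lines. The following is a minimal, single-node NumPy illustration of the general algorithm (not the authors' CUDA implementation); the function name, array shapes, and geometry are assumptions for the example.

```python
import numpy as np

def backproject(pulses, platform_pos, pixel_xyz, fc, fs, c=3e8, r0=0.0):
    """Accumulate range-compressed pulses onto an arbitrary pixel grid.

    pulses       : (n_pulses, n_samples) complex range-compressed data
    platform_pos : (n_pulses, 3) antenna phase-center positions
    pixel_xyz    : (n_pixels, 3) imaging-surface sample locations
    fc, fs       : center frequency and range sample rate (Hz)
    r0           : range corresponding to sample index 0
    """
    image = np.zeros(pixel_xyz.shape[0], dtype=complex)
    for p in range(pulses.shape[0]):
        # Exact slant range to every pixel -- no planar-wavefront assumption.
        r = np.linalg.norm(pixel_xyz - platform_pos[p], axis=1)
        # Fractional sample index of the two-way delay, then linear interpolation.
        idx = (2.0 * (r - r0) / c) * fs
        valid = (idx >= 0) & (idx < pulses.shape[1] - 1)
        i0 = np.floor(idx[valid]).astype(int)
        frac = idx[valid] - i0
        samp = (1 - frac) * pulses[p, i0] + frac * pulses[p, i0 + 1]
        # Matched-phase rotation at the carrier, then coherent accumulation.
        image[valid] += samp * np.exp(1j * 4 * np.pi * fc * r[valid] / c)
    return image
```

The per-pulse work is independent across pixels, which is what makes the algorithm amenable to the pixel-parallel GPU and multi-node decompositions the paper describes.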


Journal of the Acoustical Society of America | 2002

Motion compensation technique for wide beam synthetic aperture sonar

Jose E. Fernandez; Daniel A. Cook; James T. Christoff

Optimal performance of synthetic aperture sonar (SAS) systems requires accurate motion and medium compensation. Any uncorrected deviations from those assumed during the SAS beamforming process can degrade the beam pattern of the synthetic aperture in various ways (broadening and distortion of the main lobe, increased side lobe and grating lobe levels, etc.). These degradations manifest in the imagery as reduced resolution, blurring, target ghosts, and so on. An accurate technique capable of estimating motion and medium fluctuations has been developed. The concept is to adaptively track a small patch on the sea bottom, on the order of a resolution cell in size, by steering the SAS beam as the platform moves along its trajectory. Any path length differences to that patch (other than the quadratic function produced by the steering process) will be due to relative displacements caused by motion and/or medium fluctuations and can be detected by cross-correlation methods. This technique has advantages over other data-driven ...
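The cross-correlation step at the heart of this approach can be illustrated with a generic time-delay estimator; this is a sketch of the general method, not the authors' implementation, and the signal and function names are assumptions.

```python
import numpy as np

def estimate_delay(ping_a, ping_b, fs):
    """Estimate the relative time delay between two echoes from the same
    seafloor patch by locating the peak of their cross-correlation.

    Returns the delay of ping_b relative to ping_a in seconds; the
    corresponding two-way path length difference is c * delay.
    """
    # Full cross-correlation; peak index encodes the lag in samples.
    xc = np.correlate(ping_b, ping_a, mode="full")
    lag = np.argmax(xc) - (len(ping_a) - 1)
    return lag / fs
```

In practice the peak would be interpolated to sub-sample precision before converting the delay to a displacement estimate; integer-lag resolution is used here to keep the sketch short.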


Journal of the Acoustical Society of America | 2004

AUV‐based synthetic aperture sonar: Initial experiences and insights

Daniel A. Cook; Jose E. Fernandez; John S. Stroud; Kerry W. Commander; Anthony D. Matthews

The ability to perform synthetic aperture sonar (SAS) imaging from autonomous underwater vehicles (AUVs) has only recently been achieved. The combination of the two technologies is a milestone in the field of underwater sensing: pairing high-resolution SAS imaging with AUVs will provide military, research, and commercial users with systems of unprecedented performance and capability. The U.S. Navy took delivery of the first AUV-based SAS in early 2003. A description of the system will be presented along with the methodologies employed. The emphasis will be on image quality and repeatability, as well as the differences between operating an AUV-based SAS and a towed SAS. Additional topics will include general comments and recommendations for better AUV/SAS integration, such as mission planning and vehicle control strategies intended to maximize the chances of obtaining high-quality imagery, SAS motion measurement requirements coupled with on-board vehicle navigation, and the potential of using the SAS data to augment the vehicle motion sensors. Lastly, a brief overview of forthcoming Navy SAS systems will be included.


Journal of the Acoustical Society of America | 2018

Characterization of internal waves in synthetic aperture sonar imagery via ray tracing

David J. Pate; Daniel A. Cook; Anthony P. Lyons; Roy Edgar Hansen

Synthetic aperture sonar imagery often captures features that appear similar to sand waves but are actually pockets of denser water traveling as isolated waves along the seafloor. These pockets of cold water refract acoustic waves like a lens, causing intensity peaks and shadows that resemble medium- to large-scale sand waves. This work uses dynamic ray tracing to predict the intensity return as affected by refraction. First, we explore the nature of the intensity pattern created by internal waves of various shapes and sizes. Then, we use an optimization-based approach to solve the inverse problem: given an intensity pattern, determine the size, shape, and location of the internal wave that created it.
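As a minimal illustration of the refraction mechanism at work here, a ray can be stepped through horizontally stratified layers using Snell's law. The authors use dynamic ray tracing through a full internal-wave sound-speed field; this layered sketch and its parameter names are assumptions chosen only to show the bending effect.

```python
import numpy as np

def trace_ray(theta0, c_profile, dz):
    """Trace a ray through a stack of constant-speed layers via Snell's law.

    theta0    : initial angle from vertical (radians)
    c_profile : sound speed in each layer (m/s), top to bottom
    dz        : layer thickness (m)
    Returns horizontal ray positions at each layer interface.
    """
    p = np.sin(theta0) / c_profile[0]        # ray parameter, conserved
    x, xs = 0.0, [0.0]
    for c in c_profile:
        s = p * c                            # sin(theta) in this layer
        if abs(s) >= 1.0:                    # total internal reflection
            break
        x += dz * s / np.sqrt(1.0 - s ** 2)  # dx = dz * tan(theta)
        xs.append(x)
    return np.array(xs)
```

Rays entering a slower (colder, denser) region bend toward the vertical, which is the lensing that concentrates energy into the intensity peaks and shadows described above.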


Journal of the Acoustical Society of America | 2018

Synthetic aperture sonar speckle noise reduction performance evaluation

Marsal A. Bruna; David J. Pate; Daniel A. Cook

Synthetic aperture sonar (SAS) imagery always contains speckle, which is often modeled as multiplicative noise with respect to the underlying scene reflectivity. Speckle arises from the coherent interference of waves backscattered by rough surfaces within a resolution cell. Numerous image processing algorithms have been proposed to reduce speckle while preserving image features. Frequently, these algorithms are evaluated using actual SAS imagery, where the underlying noise-free image is unknown. The lack of noise-free references, together with a limited number of images, can lead to an incomplete estimate of algorithm performance. Accurately assessing performance also requires a variety of image quality and speckle reduction metrics. In this paper, a unified framework using both simulated and real imagery is used to analyze the performance of prominent algorithms, such as multilook and anisotropic diffusion methods.
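Two of the building blocks mentioned above can be sketched briefly: boxcar multilooking, and a standard speckle-suppression metric, the equivalent number of looks (ENL). This is a generic illustration, not the paper's evaluation framework, and the function names are assumptions.

```python
import numpy as np

def multilook(intensity, n_az, n_rg):
    """Reduce speckle by averaging non-overlapping n_az x n_rg blocks
    of a single-look intensity image (boxcar multilooking)."""
    h, w = intensity.shape
    h, w = h - h % n_az, w - w % n_rg        # crop to whole blocks
    blocks = intensity[:h, :w].reshape(h // n_az, n_az, w // n_rg, n_rg)
    return blocks.mean(axis=(1, 3))

def enl(region):
    """Equivalent number of looks: mean^2 / variance over a homogeneous
    region. Higher ENL indicates stronger speckle suppression."""
    return region.mean() ** 2 / region.var()
```

For fully developed single-look speckle the intensity is exponentially distributed, so ENL is near 1; averaging N independent samples raises it toward N, at the cost of resolution, which is exactly the trade-off speckle-reduction filters try to improve on.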


Journal of the Acoustical Society of America | 2018

Representation trade-offs for the quality assessment of acoustic color signatures

J. D. Park; Daniel A. Cook; Alan J. Hunter

Acoustic color is a representation of the spectral response over aspect, typically in 2-D. This representation aims to characterize the structural acoustic phenomena associated with an object, with frequency and aspect as the two axes of choice. However, the strongest acoustic color signatures are typically the geometric features of an object due to geometric scattering. Naturally, these geometric features are well represented as straight-line signatures in the spatial spectrum representation, with horizontal and vertical wavenumbers as the axes. Elastic responses due to other scattering mechanisms, such as Rayleigh or Mie scattering, are typically smaller in magnitude and are not easily recognizable or separable from the stronger responses. This work explores variants of acoustic color to find intuitive representations for these types of scattering responses and to assess their signature quality.
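The change of axes described above, from frequency and aspect to horizontal and vertical wavenumbers, is a polar-to-Cartesian mapping of the measurement grid. The following sketch assumes the free-space relation k = 2*pi*f/c; the function name and parameter choices are illustrative assumptions.

```python
import numpy as np

def to_wavenumber(freqs_hz, aspects_rad, c=1500.0):
    """Map an acoustic-color grid (frequency x aspect) to spatial-spectrum
    coordinates: horizontal and vertical wavenumbers (rad/m).

    Each (frequency, aspect) cell lies on a polar grid of radius
    k = 2*pi*f/c at angle theta; the spatial-spectrum view plots the
    same samples in Cartesian (kx, ky) coordinates.
    """
    f, th = np.meshgrid(freqs_hz, aspects_rad, indexing="ij")
    k = 2.0 * np.pi * f / c
    return k * np.cos(th), k * np.sin(th)    # (kx, ky)
```

In the (kx, ky) view, broadband geometric echoes that sweep linearly with frequency collapse onto straight lines, which is why the spatial spectrum separates them so cleanly from weaker elastic responses.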


IEEE Journal of Oceanic Engineering | 2018

Synthetic Aperture Sonar Image Contrast Prediction

Daniel A. Cook; Daniel C. Brown

Sidescan imaging sonar performance is often described using metrics such as resolution, maximum range, and area coverage rate. These attributes focus on describing how finely and over what area a sensor can make imagery of the seafloor and targets. While these metrics are important, they are inadequate for fully characterizing the quality of sonar imagery. Shadow regions often carry as much, and sometimes even more, information than the direct return from a target. The shadow contrast within an image is therefore a critical property, and it is the result of a complex interaction between the sonar hardware, the environment, and the signal processing. This paper builds on key results from the synthetic aperture radar and sonar literature to develop a comprehensive, quantitative model for predicting the shadow contrast ratio. A model for the contrast ratio is constructed using an approach that is similar to the development of the sonar equation. This ratio describes the average relative level between the seafloor and a shadow. The model includes the effects of the transducer and array design, ambient noise, quantization noise, unwanted volume and surface reverberation, and the signal processing used to construct the synthetic aperture sonar (SAS) imagery. The shadow contrast predicted by the model is compared to the contrast measured from SAS imagery, where close agreement is observed. Examples are presented where the model is applied to a hypothetical high-frequency imaging SAS sensor in a typical operating environment. The shadow contrast ratio is estimated as a function of the sensor range and key parameters of interest, such as receiver channel spacing, sediment backscatter coefficient, and the time-bandwidth product of the transmitted signal. At long ranges, the contrast ratio is limited by additive noise interference, while at near ranges the effect of the receiver channel spacing dominates the contrast ratio through along-track ambiguities. 
The use of this tool for analysis and prediction of image quality has significant implications for system design, mission planning, and data processing.
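The sonar-equation-style combination of interference terms can be sketched at a very high level. This toy budget, which simply power-sums the contributions that fill in a shadow, is an assumption-laden simplification for illustration, not the paper's model; the function name and example levels are invented.

```python
import numpy as np

def shadow_contrast_db(seafloor_db, noise_db, ambiguity_db, reverb_db):
    """Combine the interference terms that fill in a shadow (additive
    noise, along-track ambiguities, unwanted reverberation) into a single
    seafloor-to-shadow contrast ratio, sonar-equation style.

    All inputs are in dB relative to the same reference; the result is
    the contrast ratio in dB.
    """
    # Incoherent contributions add in power, so sum in linear units.
    shadow_power = sum(10.0 ** (x / 10.0)
                       for x in (noise_db, ambiguity_db, reverb_db))
    return seafloor_db - 10.0 * np.log10(shadow_power)
```

A budget like this makes the regimes described above easy to see: when the ambiguity and reverberation terms are pushed far down, the additive-noise term alone sets the floor of the shadow, mirroring the long-range behavior reported in the paper.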


Journal of the Acoustical Society of America | 2017

Synthetic aperture sonar calibration using active transponders

Brian O'Donnell; Daniel A. Cook

There is an emerging desire within the synthetic aperture sonar community to establish the theory and practical means for calibrating imagery, such that each pixel value represents an actual backscatter coefficient. Image calibration has long been standard practice for remote sensing synthetic aperture radar. However, there are a number of challenges that make calibration of sonar imagery more difficult. Among these is the fact that good inexpensive passive targets (like corner reflectors used in radar) do not have an exact counterpart in underwater acoustics. Passive calibration targets for imaging sonar, such as solid spheres or specially-designed focusing spheres, are used in practice, but active transponders offer far more capability and flexibility. The advantages and applications of active transponders are discussed, along with the current development status of a project at GTRI to design and build an affordable and recoverable calibration device.


Journal of the Acoustical Society of America | 2017

Synthetic aperture sonar contrast prediction and estimation

Daniel A. Cook; Daniel C. Brown

Sidescan imaging sonar performance is often described using metrics such as resolution, maximum range, and area coverage rate. These attributes alone cannot completely characterize image quality. Shadow regions often carry as much, and sometimes even more, information than the direct return from a target. The shadow contrast ratio is an important metric of image quality, and it includes contributions from the sonar hardware, the environment, and the signal processing. This presentation provides an overview of a comprehensive quantitative model for predicting the contrast ratio. This analysis of image quality has significant implications for system design, mission planning, and data processing. Key design and processing tradeoffs are highlighted using design parameters typical for high-frequency synthetic aperture sonar applications.


Journal of the Acoustical Society of America | 2017

Signal processing trade-offs for the quality assessment of acoustic color signatures

Brett E. Bissinger; J. D. Park; Daniel A. Cook; Alan J. Hunter

Acoustic color is a representation of the spectral response over aspect, typically in 2-D. The two natural axes for this representation are frequency and the aspect to the object. It is intuitive to assume that finer resolution in the two dimensions would yield more extractable information and thus improved quality. However, with conventional linear track data collection methods, there is an inherent trade-off between signal processing decisions and the amount of information that can be utilized without loss of quality. This work presents how the choice of representation domain affects the distribution of information. Quality metrics, including resolution and signal-to-noise ratio, will be discussed in the context of various signal processing choices and parameters. Extensions of acoustic color will also be explored, such as time-evolving acoustic color, which shows how the spectral response changes within a ping cycle.

Collaboration


Dive into Daniel A. Cook's collaborations.

Top Co-Authors

Jose E. Fernandez | Naval Surface Warfare Center

Daniel C. Brown | Naval Surface Warfare Center

J. D. Park | Pennsylvania State University

John S. Stroud | Washington State University

Kerry W. Commander | Naval Surface Warfare Center

Anthony D. Matthews | Naval Surface Warfare Center

Anthony P. Lyons | Pennsylvania State University

Daniel P. Campbell | Georgia Tech Research Institute

David J. Pate | Georgia Tech Research Institute