
Publication


Featured research published by John P. Kerekes.


International Geoscience and Remote Sensing Symposium | 2008

Development of a Web-Based Application to Evaluate Target Finding Algorithms

David K. Snyder; John P. Kerekes; Ian Fairweather; Robert Crabtree; Jeremy Shive; Stacey Hager

Hyperspectral imagery leverages the use of spectral measurements to detect objects of interest in a scene that may not be discernible from their spatial patterns alone. This paper describes a project to make available to the community a set of hyperspectral airborne images on which anyone can run a target detection algorithm to find selected objects of interest in an image. The image and target spectral signatures are publicly available, but the pixel locations of targets within the image are withheld to allow for independent algorithm evaluation and scoring. To distribute these data, a website has been created which allows users to download the hyperspectral data and upload their target detection results, which are then automatically scored. The Target Detection Blind Test website is located at http://dirs.cis.rit.edu/blindtest/.
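
As a concrete illustration of the kind of detector a user might run on the downloaded cube before uploading candidate pixel locations, the following is a minimal sketch of a pixel-wise spectral matched filter. The array names and shapes are illustrative assumptions, and this is not the scoring method used by the website.

# Minimal sketch (not from the paper): a baseline spectral matched-filter detector.
import numpy as np

def matched_filter_scores(cube, target_sig):
    """cube: (rows, cols, bands) radiance/reflectance; target_sig: (bands,) library spectrum."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)                        # background mean spectrum
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # pseudo-inverse for numerical stability
    d = target_sig - mu
    w = cov_inv @ d / (d @ cov_inv @ d)        # normalized matched-filter weights
    scores = (X - mu) @ w                      # per-pixel detection statistic
    return scores.reshape(rows, cols)

# Example use: rank pixels by score and report the top candidates.
# scores = matched_filter_scores(cube, target_sig)
# rows_idx, cols_idx = np.unravel_index(np.argsort(scores, axis=None)[::-1][:50], scores.shape)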


IEEE Geoscience and Remote Sensing Letters | 2008

Receiver Operating Characteristic Curve Confidence Intervals and Regions

John P. Kerekes

Many researchers have presented results showing the empirical performance of target detection algorithms using hyperspectral or synthetic aperture radar imagery. In nearly all cases, these probabilities of detection and false alarm are presented as precise values as opposed to their true nature as estimates of random values. In this letter, we provide analytical tools and examples of computing confidence intervals and regions around these estimates commonly presented as points on receiver operating characteristic (ROC) curves. It is suggested that these tools be adopted by researchers when presenting their results to provide their audience with a quantitative metric for proper interpretation of empirically estimated ROC curves.
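
As a minimal worked example of the kind of interval advocated here (using the standard Wilson score formula, which may differ from the letter's exact formulation), consider a probability of detection estimated from a finite number of targets:

# Illustrative sketch: Wilson score confidence interval for an estimated Pd = k/n.
import math

def wilson_interval(k, n, z=1.96):
    """k detections out of n targets; z = 1.96 gives an approximately 95% interval."""
    p_hat = k / n
    denom = 1.0 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# With 18 of 20 targets detected, the point estimate is Pd = 0.90, but the 95% interval
# is roughly (0.70, 0.97), which is what a reader needs to judge whether two ROC curves differ.
print(wilson_interval(18, 20))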


Algorithms for Multispectral, Hyperspectral, and Ultraspectral Imagery Conference | 2000

Algorithm taxonomy for hyperspectral unmixing

Nirmal Keshava; John P. Kerekes; Dimitris G. Manolakis; Gary A. Shaw

In this paper, we introduce a set of taxonomies that hierarchically organize and specify algorithms associated with hyperspectral unmixing. Our motivation is to collectively organize and relate algorithms in order to assess the current state of the art in the field and to facilitate objective comparisons between methods. The hyperspectral sensing community is populated by investigators with disparate scientific backgrounds who speak in their respective languages, and efforts in spectral unmixing developed within these disparate communities have inevitably led to duplication. We hope our analysis removes this ambiguity and redundancy by using a standard vocabulary, and that the presentation we provide clearly summarizes what has and has not been done. As we shall see, the framework for the taxonomies derives its organization from the fundamental, philosophical assumptions imposed on the problem, rather than from the common calculations the algorithms perform or the similar outputs they might yield.
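
For readers who want a concrete anchor, one common branch of the unmixing problem such taxonomies organize is inversion of the linear mixing model x = Ma + n under an abundance non-negativity constraint. The sketch below is a generic non-negative least-squares solver with a soft sum-to-one constraint; it is illustrative only and not drawn from the paper.

# Hedged sketch: non-negative least-squares unmixing of a single pixel spectrum.
import numpy as np
from scipy.optimize import nnls

def unmix_nnls(x, M, delta=10.0):
    """x: (bands,) pixel spectrum; M: (bands, endmembers) endmember matrix.
    The sum-to-one constraint is enforced softly by row augmentation with weight delta."""
    bands, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])   # append a sum-to-one row
    x_aug = np.append(x, delta)                       # corresponding target value
    abundances, _ = nnls(M_aug, x_aug)                # non-negative abundance estimates
    return abundances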


IEEE Transactions on Geoscience and Remote Sensing | 2009

A Method for Assessing Spectral Image Utility

Marcus S. Stefanou; John P. Kerekes

The utility of an image is an attribute that describes the ability of that image to satisfy performance requirements for a particular application. This paper establishes the context for spectral image utility by first reviewing traditional approaches to assessing panchromatic image utility and then discussing differences for spectral imagery. We define spectral image utility for the subpixel target detection application as the area under the receiver operating characteristic (ROC) curve summarized across a range of target detection scenario parameters. We propose a new approach to assessing the utility of any spectral image for any target type, size, and detection algorithm. Using six airborne hyperspectral images, we demonstrate the sensitivity of the assessed image utility to various target detection scenario parameters and show the flexibility of this approach as a tool to answer specific user information requirements. The results of this investigation lead to a better understanding of spectral image information vis-à-vis target detection performance and provide a step toward quantifying the ability of a spectral image to satisfy information exploitation requirements.
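
The summary statistic used here, the area under an ROC curve, can be computed from sampled (Pfa, Pd) operating points by trapezoidal integration, as in the minimal sketch below. The full utility assessment in the paper sweeps this statistic over many target and algorithm scenario parameters; that machinery is not shown.

# Minimal sketch: area under an ROC curve from sampled operating points.
import numpy as np

def roc_auc(pfa, pd):
    """pfa, pd: arrays of false-alarm and detection probabilities."""
    pfa = np.asarray(pfa, float)
    pd = np.asarray(pd, float)
    order = np.argsort(pfa)                 # integrate along increasing Pfa
    return np.trapz(pd[order], pfa[order])

# Example: a detector whose Pd rises quickly at low Pfa scores near 1.0.
print(roc_auc([0.0, 0.01, 0.1, 1.0], [0.0, 0.6, 0.9, 1.0]))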


International Conference on Multimedia Information Networking and Security | 1999

Compact active hyperspectral imaging system for the detection of concealed targets

Bernadette Johnson; Rose M. Joseph; Melissa L. Nischan; Amy B. Newbury; John P. Kerekes; Herbert T. Barclay; Berton C. Willard; John J. Zayhowski

We have recently conducted a series of laboratory and field tests to demonstrate the utility of combining active illumination with hyperspectral imaging for the detection of concealed targets in natural terrain. The active illuminator, developed at MIT Lincoln Laboratory, is a novel microlaser-pumped fiber Raman source that provides high-brightness, subnanosecond-pulse-length output spanning the visible through near-IR spectral range. The hyperspectral imaging system comprises a compact, grating-based spectrometer that uses a gateable, intensified CCD array as the detector element. The illuminator and hyperspectral imaging system are mounted on a small platform that is itself mounted on a tripod and scanned in azimuth to build an image scene of up to several hundred spectral bands. The system has been deployed under a variety of environmental conditions, including night-time illumination, and on a variety of target scenes, including exposed and concealed plastic and metallic mine-like targets. Targets have been detected and identified on the basis of spectral reflectance, fluorescence signatures, degree of polarization, and range-to-target information. The combination of laser-like broadband illumination and hyperspectral imaging offers great promise in concealed or obscured target detection. Ongoing developments include the incorporation of broadband illuminators in the 1 to 2 μm and 3 to 5 μm spectral bands, with corresponding increases in spectral coverage of the imaging and detection systems.


Proceedings of SPIE | 2001

Statistics of hyperspectral imaging data

Dimitris G. Manolakis; David Marden; John P. Kerekes; Gary A. Shaw

Characterization of the joint (among wavebands) probability density function (PDF) of hyperspectral imaging (HSI) data is crucial for several applications, including the design of constant false alarm rate (CFAR) detectors and statistical classifiers. HSI data are vector (or equivalently multivariate) data in a vector space with dimension equal to the number of spectral bands. As a result, the scalar statistics utilized by many detection and classification algorithms depend upon the joint PDF of the data and the vector-to-scalar mapping defining the specific algorithm. For reasons of analytical tractability, the multivariate Gaussian assumption has dominated the development and evaluation of algorithms for detection and classification in HSI data, although it is widely recognized that it does not always provide an accurate model for the data. The purpose of this paper is to provide a detailed investigation of the joint and marginal distributional properties of HSI data. To this end, we assess how well the multivariate Gaussian PDF describes HSI data using univariate techniques for evaluating marginal normality, and techniques that use unidimensional views (projections) of multivariate data. We show that the class of elliptically contoured distributions, which includes the multivariate normal distribution as a special case, provides a better characterization of the data. Finally, it is demonstrated that the class of univariate stable random variables provides a better model for the heavy-tailed output distribution of the well-known matched filter target detection algorithm.
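
A hedged sketch of the kind of check discussed above: apply a matched filter to background pixels and compare the empirical tail of its output with the Gaussian prediction; a heavy tail suggests that a non-Gaussian model (e.g., elliptically contoured or stable) fits better. Variable names and shapes are illustrative assumptions, not the paper's procedure.

# Illustrative sketch: tail mass of matched-filter outputs versus the Gaussian prediction.
import math
import numpy as np

def matched_filter_tail(X, target_sig, k=3.0):
    """X: (pixels, bands) background spectra; target_sig: (bands,) target spectrum."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    w = cov_inv @ (target_sig - mu)
    y = (X - mu) @ w                           # matched-filter outputs
    y = (y - y.mean()) / y.std()               # standardize
    empirical = np.mean(np.abs(y) > k)         # observed tail mass beyond k sigma
    gaussian = math.erfc(k / math.sqrt(2))     # two-sided Gaussian tail mass beyond k sigma
    return empirical, gaussian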


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012

Object Tracking Using High Resolution Satellite Imagery

Lingfei Meng; John P. Kerekes

High resolution multispectral satellite images with multi-angular look capability have tremendous potential applications. We present an object tracking algorithm with three-step processing: moving object estimation, target modeling, and target matching. Potentially moving objects are first identified in the time-series images. The target is then modeled by extracting both spectral and spatial features. In the target matching procedure, the Bhattacharyya distance, histogram intersection, and pixel count similarity are combined in a novel regional operator design. Our algorithm has been tested using a set of multi-angular sequence images acquired by the WorldView-2 satellite. The tracking performance is analyzed by computing the recall, precision, and F1 score of the test. In this study, we demonstrate the capability of object tracking in a complex environment with the help of high resolution multispectral satellite imagery.
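
For clarity, a hedged sketch of two of the similarity measures named above, computed on normalized feature histograms of the target model and a candidate region; the paper's regional operator that combines them with pixel count similarity is not reproduced here.

# Illustrative sketch: Bhattacharyya distance and histogram intersection between two histograms.
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    p = np.asarray(p, float) / np.sum(p)       # normalize to unit mass
    q = np.asarray(q, float) / np.sum(q)
    bc = np.sum(np.sqrt(p * q))                # Bhattacharyya coefficient
    return -np.log(bc + eps)

def histogram_intersection(p, q):
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return np.sum(np.minimum(p, q))            # 1.0 means identical normalized histograms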


Proceedings of SPIE | 2013

The SHARE 2012 data campaign

AnneMarie Giannandrea; Nina G. Raqueno; David W. Messinger; Jason Faulring; John P. Kerekes; Jan van Aardt; Kelly Canham; Shea Hagstrom; Erin Ontiveros; Aaron Gerace; Jason R. Kaufman; Karmon Vongsy; Heather Griffith; Brent D. Bartlett; Emmett J. Ientilucci; Joseph Meola; Lauwrence Scarff; Brian J. Daniel

A multi-modal (hyperspectral, multispectral, and LIDAR) imaging data collection campaign was conducted just south of Rochester, New York, in Avon, NY on September 20, 2012 by the Rochester Institute of Technology (RIT) in conjunction with SpecTIR, LLC, the Air Force Research Lab (AFRL), the Naval Research Lab (NRL), United Technologies Aerospace Systems (UTAS), and MITRE. The campaign was a follow-on to the SpecTIR Hyperspectral Airborne Rochester Experiment (SHARE) from 2010. Data were collected in support of the eleven simultaneous experiments described here. The airborne imagery was collected over four different sites with hyperspectral, multispectral, and LIDAR sensors. The sites for data collection included Avon, NY; Conesus Lake; Hemlock Lake and forest; and a nearby quarry. Experiments included topics such as target unmixing, subpixel detection, material identification, impacts of illumination on materials, forest health, and in-water target detection. An extensive ground truthing effort was conducted in addition to collection of the airborne imagery. The ultimate goal of the data collection campaign is to provide the remote sensing community with a shareable resource to support future research. This paper details the experiments conducted and the data that were collected during this campaign.


Applied Imagery Pattern Recognition Workshop | 2007

3D Scene Reconstruction through a Fusion of Passive Video and Lidar Imagery

Prudhvi Gurram; Harvey E. Rhody; John P. Kerekes; Stephen R. Lach; Eli Saber

Geometric structure of a scene can be reconstructed using many methods. In recent years, two prominent approaches have been digital photogrammetric analysis using passive stereo imagery and feature extraction from lidar point clouds. In the first method, the traditional technique relies on finding common points in two or more 2D images that were acquired from different view perspectives. More recently, similar approaches have been proposed where stereo mosaics are built from aerial video using parallel ray interpolation, and surfaces are subsequently extracted from these mosaics using stereo geometry. Although the lidar data inherently contain 2.5 or 3 dimensional information, they also require processing to extract surfaces. In general, structure-from-stereo approaches work well when the scene surfaces are flat and have strong edges in the video frames. Lidar processing works well when the data are densely sampled. In this paper, we analyze and discuss the pros and cons of the two approaches. We also present three challenging situations that illustrate the benefits that could be derived from fusing the two data sources: when one or more edges are not clearly visible in the video frames, when the lidar data sampling density is low, and when the object surface is not planar. Examples are provided from the processing of real airborne data gathered using a combination of lidar and passive imagery taken from separate aircraft platforms at different times.
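
A hedged sketch of one basic step in fusing the two modalities: projecting lidar points into a video frame with a pinhole camera model so that lidar-derived surfaces and image edges can be compared in the same pixel coordinates. The calibration and pose inputs (K, R, t) are assumed, not taken from the paper.

# Illustrative sketch: project lidar points into an image using a pinhole camera model.
import numpy as np

def project_lidar_to_image(points_xyz, K, R, t):
    """points_xyz: (N, 3) lidar points in world coordinates; K: (3, 3) camera intrinsics;
    R: (3, 3) rotation and t: (3,) translation from world to camera frame.
    Assumes all points lie in front of the camera."""
    cam = R @ points_xyz.T + t.reshape(3, 1)   # world -> camera coordinates
    uv = K @ cam                               # camera -> homogeneous pixel coordinates
    return (uv[:2] / uv[2]).T                  # perspective divide -> (N, 2) pixel locations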


SPIE's International Symposium on Optical Science, Engineering, and Instrumentation | 1999

Analysis of HYDICE noise characteristics and their impact on subpixel object detection

Melissa L. Nischan; John P. Kerekes; Jerrold E. Baum; Robert W. Basedow

A number of organizations are using the data collected by the HYperspectral Digital Imagery Collection Experiment (HYDICE) airborne sensor to demonstrate the utility of hyperspectral imagery (HSI) for a variety of applications. The interpretation and extrapolation of these results can be influenced by the nature and magnitude of any artifacts introduced by the HYDICE sensor. A short study was undertaken which first reviewed the literature for discussions of the sensor's noise characteristics and then extended those results with additional analyses of HYDICE data. These investigations used unprocessed image data from the onboard Flight Calibration Unit (FCU) lamp and ground scenes taken at three different sensor altitudes and sample integration times. Empirical estimates of the sensor signal-to-noise ratio (SNR) were compared to predictions from a radiometric performance model. The spectral band-to-band correlation structure of the sensor noise was studied. Using an end-to-end system performance model, the impact of various noise sources on subpixel detection was analyzed. The results show that, although a number of sensor artifacts exist, they have little impact on the interpretations of HSI utility derived from analyses of HYDICE data.
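
An illustrative sketch of an empirical per-band SNR estimate of the kind used in such sensor studies: the band mean divided by the band standard deviation over a spatially uniform region (for example, FCU lamp frames or a flat ground panel). This is a generic estimator, not necessarily the paper's exact procedure.

# Illustrative sketch: per-band SNR from a spatially uniform region.
import numpy as np

def empirical_snr(region):
    """region: (pixels, bands) raw digital counts from a uniform area."""
    region = np.asarray(region, float)
    signal = region.mean(axis=0)               # per-band mean signal level
    noise = region.std(axis=0, ddof=1)         # per-band standard deviation (noise estimate)
    return signal / noise                      # per-band SNR estimate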

Collaboration


Dive into John P. Kerekes's collaboration.

Top Co-Authors

Emmett J. Ientilucci (Rochester Institute of Technology)
David W. Messinger (Rochester Institute of Technology)
Michael G. Gartley (Rochester Institute of Technology)
Scott D. Brown (Rochester Institute of Technology)
Jerrold E. Baum (Massachusetts Institute of Technology)
Jan van Aardt (Rochester Institute of Technology)
Jared A. Herweg (Rochester Institute of Technology)
Lingfei Meng (Rochester Institute of Technology)
Michael D. Presnar (Rochester Institute of Technology)