Jay B. Jordan
New Mexico State University
Publications
Featured research published by Jay B. Jordan.
Enhanced and Synthetic Vision 2000 Conference | 2000
Wendell R. Watkins; David H. Tofsted; V. Grayson CuQlock-Knopp; Jay B. Jordan; John O. Merritt
Navigation, especially in aviation, has been plagued since its inception by the hazards of poor visibility conditions. Our ground vehicles and soldiers have difficulty moving at night or in low visibility, even with night-vision augmentation, because of the lack of contrast and depth perception. Landing an aircraft in fog is more difficult still, even with radar tracking. The visible and near-infrared spectral regions have been ignored because of the problem of backscattered radiation from landing-light illumination, similar to that experienced when driving in fog with high-beam headlights. This paper describes the experimentation related to the development of a visible/near-infrared active hyperstereo vision system for landing an aircraft in fog. Hyperstereo vision is a binocular system with a baseline separation wider than the human interocular spacing. The basic concept is to compare the imagery obtained from alternate wings of the aircraft while illuminating only from the opposite wing. This produces images with a backscatter radiation pattern that has a decreasing gradient away from the side with the illumination source. Flipping the imagery from one wing left to right and comparing it with the opposite wing's imagery allows the backscattered radiation pattern to be subtracted from both sets of imagery. The use of retro-reflectors along the sides of the runway allows the human stereo fusion process to fuse the forward-scatter-blurred hyperstereo imagery of the retro-reflector array while minimizing backscatter. An appropriate amount of inverse point spread function deblurring is applied for improved resolution of scene content, to aid in the detection of objects on the runway. The experimental system is described and preliminary results are presented to illustrate the concept.
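The flip-and-compare step lends itself to a compact image-processing sketch. The Python fragment below is purely illustrative: the function name, the minimum-based backscatter estimate, and the array conventions are assumptions, not the authors' actual processing chain. It mirrors the right-wing image so the two backscatter gradients align, estimates their common component, and subtracts it from both views.

```python
import numpy as np

def remove_backscatter(left_img: np.ndarray, right_img: np.ndarray):
    """Illustrative backscatter removal for a hyperstereo pair.

    left_img is captured from the left wing while illuminating from the
    right wing, and vice versa, so each image carries a backscatter
    gradient that decreases away from its illumination side. Mirroring
    one image aligns the two gradients so a shared pattern can be
    estimated and subtracted. A sketch, not the paper's method.
    """
    left = left_img.astype(float)
    mirrored = np.fliplr(right_img).astype(float)
    # With the gradients aligned, the pixelwise minimum is a crude
    # estimate of the common backscatter component.
    backscatter = np.minimum(left, mirrored)
    left_clean = left - backscatter
    right_clean = np.fliplr(mirrored - backscatter)
    return left_clean, right_clean
```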
International Conference on Acoustics, Speech, and Signal Processing | 1984
Jay B. Jordan; Lonnie C. Ludeman
An algorithm is described that operates on a digitized television frame or digital infrared image to rapidly locate tightly clustered objects that occupy less than half the field of view and can be enclosed by rectangles. The algorithm uses a maximum entropy image and projections, in place of arbitrary heuristics, to guide the location and segmentation process.
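As a rough illustration of the projection idea, the sketch below pairs a Kapur-style maximum-entropy threshold with row and column projections to box a bright cluster. The threshold construction and function names are assumptions; the paper does not spell out its exact formulation here.

```python
import numpy as np

def max_entropy_threshold(img: np.ndarray) -> int:
    """Kapur-style maximum-entropy threshold, one common way to form a
    'maximum entropy image'; the paper's construction may differ."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0   # normalized background histogram
        q1 = p[t:][p[t:] > 0] / p1   # normalized object histogram
        h = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

def object_rectangle(binary: np.ndarray):
    """Enclose the bright cluster in a rectangle using row and column
    projections in place of heuristic search."""
    r = np.nonzero(binary.sum(axis=1))[0]  # rows containing object pixels
    c = np.nonzero(binary.sum(axis=0))[0]  # columns containing object pixels
    if r.size == 0 or c.size == 0:
        return None
    return r[0], r[-1], c[0], c[-1]
```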
10th Meeting on Optical Engineering in Israel | 1997
Wendell R. Watkins; Jay B. Jordan; Mohan M. Trivedi
Recent stereo vision experiments show potential for enhancing vehicular navigation, target acquisition, and optical turbulence mitigation. The experiments involved the use of stereo vision headsets connected to visible and 8-12 micrometer IR imagers. The imagers were separated by up to 50 m and equipped with telescopes for viewing at ranges from tens of meters up to 4 km. The important findings were: (1) human viewers were able to discern terrain undulations for obstacle avoidance, (2) human viewers were able to detect depth features within the scenes that enhanced the target acquisition process over monocular viewing, and (3) human viewers noted an appreciable reduction in the distortion effects of optical turbulence compared with viewing through a single monocular channel. For navigation, stereo goggles were developed that provide a headset display with simultaneous direct vision for vehicular navigation enhancement. For detection, the depth cues can be used to detect salient target features. For optical turbulence, the human mechanism of fusing two views into a single perceived scene can provide nearly undistorted perception. These experiments show significant improvement for many applications.
Journal of Quality in Maintenance Engineering | 1996
Amjed M. Al-Ghanim; Jay B. Jordan
Quality control charts are statistical process control tools aimed at monitoring a (manufacturing) process to detect deviations from normal operation and to aid in process diagnosis and correction. The information presented on the chart is key to the successful implementation of a quality process correction system. Pattern recognition methodology has been pursued to identify unnatural behaviour on quality control charts. This approach provides the ability to utilize the patterning information of the chart and to trace back the root causes of process deviation, thus facilitating process diagnosis and maintenance. Presents analysis and development of a statistical pattern recognition system for the explicit identification of unnatural patterns on control charts. Develops a set of statistical pattern recognizers based on the likelihood ratio approach and on correlation analysis. Designs and implements a training algorithm to maximize the probability of identifying unnatural patterns, and presents a classification procedure for real-time operation. Demonstrates the system performance using a set of newly defined measures; results from extensive experiments illustrate the power and usefulness of the statistical approach for automating unnatural pattern detection on control charts.
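A minimal sketch of one such likelihood-ratio recognizer is given below, assuming standardized observations and a Gaussian mean-shift pattern model. The models, the shift magnitude, and the decision threshold are illustrative stand-ins for the paper's trained recognizers.

```python
import numpy as np
from scipy import stats

def likelihood_ratio_shift(window: np.ndarray, shift: float = 1.5) -> float:
    """Log-likelihood ratio of a mean-shift pattern versus natural
    variation for a window of standardized control-chart observations.
    Gaussian models and the 1.5-sigma shift are illustrative choices."""
    ll_natural = stats.norm.logpdf(window, loc=0.0, scale=1.0).sum()
    ll_shift = stats.norm.logpdf(window, loc=shift, scale=1.0).sum()
    return ll_shift - ll_natural

# Flag the window as unnatural when the ratio exceeds a trained threshold.
window = np.array([0.2, 1.8, 1.1, 2.0, 1.4, 1.7])
print(likelihood_ratio_shift(window) > 0.0)  # True: shift model fits better
```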
SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics | 1995
Wei Wang; Zhonghao Bao; Qiang Meng; Gerald M. Flachs; Jay B. Jordan; Jeffrey J. Carlson
A new approach to human hand recognition is presented. It combines concepts from image segmentation, contour representation, wavelet transforms, and neural networks. With this approach, people are distinguished by their hands. After a person's hand contour is obtained, each finger of the hand is located and separated based on its points of sharp curvature. The two-dimensional (2-D) finger contour is then mapped to a one-dimensional (1-D) functional representation of the boundary called a finger signature. The wavelet transform then decomposes the finger signature into lower resolutions, retaining the most significant features. The energy at each stage of the decomposition is calculated to extract the features of each finger. A three-layer artificial neural network with back-propagation training is employed to measure the performance of the wavelet transform. A database consisting of five hand images from each of twenty-eight different people is used in the experiment. Three of the images are used for training the neural network; the other two are used for testing the algorithm. The results presented illustrate high-accuracy human recognition using this scheme.
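The signature-to-features step can be sketched in a few lines with a standard wavelet library. The wavelet choice (db4) and decomposition depth are assumptions, as the abstract does not specify them.

```python
import numpy as np
import pywt

def wavelet_energy_features(signature: np.ndarray, levels: int = 4) -> np.ndarray:
    """Decompose a 1-D finger signature and return the energy at each
    stage of the decomposition as a feature vector. Wavelet and depth
    are illustrative, not the paper's stated parameters."""
    coeffs = pywt.wavedec(signature, wavelet="db4", level=levels)
    # coeffs = [approximation, detail_level, ..., detail_1]
    return np.array([np.sum(c ** 2) for c in coeffs])
```

The resulting short feature vector is what would be fed to the three-layer back-propagation network described above.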
Characterization, Propagation, and Simulation of Sources and Backgrounds | 1991
Wendell R. Watkins; Daniel R. Billingsley; Fernando R. Palacios; Samuel B. Crow; Jay B. Jordan
An experimental tool and a set of experiments for estimating the short- and long-exposure atmospheric modulation transfer function (AMTF) in the infrared are described. Measurements are presented using a new technique of simultaneously comparing close-up infrared images with optically matched distant images to isolate distortions due to atmospheric turbulence. A unique large-area (1.8 m × 1.8 m) uniform-temperature blackbody with spatial bar patterns is used as a target for the experiments. The AMTF is measured as a function of changing atmospheric conditions, with thermally induced turbulence assessed in terms of changes in the measured AMTF. Additionally, imagery obtained during quiescent atmospheric conditions is used to characterize the system transfer function of the near-field imager and enhance the near-field image spatial resolution.
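For intuition, a textbook contrast-ratio estimate of the transfer function at one bar frequency might look like the sketch below. This is a simplified stand-in for the paper's simultaneous near/far comparison, and the function names are hypothetical.

```python
import numpy as np

def modulation(profile: np.ndarray) -> float:
    """Michelson modulation of an intensity profile across a bar pattern."""
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

def amtf_estimate(near_profile: np.ndarray, far_profile: np.ndarray) -> float:
    """Crude AMTF estimate at one bar frequency: the modulation surviving
    the atmospheric path, relative to the optically matched close-up
    reference. A textbook sketch, not the paper's full method."""
    return modulation(far_profile) / modulation(near_profile)
```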
Image Processing, Analysis, Measurement, and Quality | 1988
Jeffrey J. Carlson; Jay B. Jordan; Gerald M. Flachs
This paper presents a mathematical basis for establishing achievable performance levels for multisensor electronic vision systems. A random process model of the multisensor scene environment is developed. The concept of feature space and its importance in the context of this model is presented. A set of complexity metrics used to measure the difficulty of an electronic vision task in a given scene environment is developed and presented. These metrics are based on the feature space used for the electronic vision task and the a priori knowledge of scene truth. Several applications of complexity metrics to the analysis of electronic vision systems are proposed.
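As one plausible (not the authors') instance of such a metric, the sketch below estimates the Bayes error of a one-dimensional feature space from class-conditional histograms built with a priori scene truth: the harder the classes are to separate in the chosen feature space, the larger the value.

```python
import numpy as np

def bayes_error_estimate(features: np.ndarray, labels: np.ndarray,
                         bins: int = 32) -> float:
    """Estimated Bayes error of a 1-D feature given binary scene truth,
    from class-conditional histograms. Purely illustrative; the paper
    defines its own family of complexity metrics."""
    edges = np.histogram_bin_edges(features, bins=bins)
    joint = np.zeros((2, bins))
    for c in (0, 1):
        joint[c], _ = np.histogram(features[labels == c], bins=edges)
    joint /= joint.sum()
    # Each bin contributes the smaller of the two class masses: the
    # probability mass that even an optimal classifier must misassign.
    return joint.min(axis=0).sum()
```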
1988 Technical Symposium on Optics, Electro-Optics, and Sensors | 1988
Gerald M. Flachs; Jay B. Jordan; Jeffrey J. Carlson
An approach is presented for designing multisensor electronic vision systems using information fusion concepts. A random process model of the multisensor scene environment provides a mathematical foundation for fusing information. A complexity metric is introduced to measure the level of difficulty associated with various vision tasks. This complexity metric provides a mathematical basis for fusing information and selecting features to minimize the complexity metric. A major result presented in the paper is a method for utilizing a priori knowledge to fuse an n-dimensional feature vector X = (X1, X2, ..., Xn) into a single feature Y while retaining the same complexity. A fusing theorem is presented that defines the class of fusing functions that retains the minimum complexity.
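The flavor of the fusing result can be illustrated with a standard sufficiency argument: for known class densities, the class posterior is a scalar sufficient statistic, so classifying on the fused scalar Y loses nothing relative to the full vector X. The Gaussian class models below are assumptions for illustration, not the paper's setting.

```python
import numpy as np
from scipy import stats

def fuse_posterior(x: np.ndarray, mu0, cov0, mu1, cov1,
                   prior1: float = 0.5) -> float:
    """Fuse an n-dimensional feature vector into a single scalar via the
    class posterior P(class 1 | x). For known densities this scalar is a
    sufficient statistic, so thresholding Y at 0.5 achieves the same
    Bayes error as optimal classification on x - the spirit of the
    fusing theorem, sketched here with Gaussian classes."""
    p1 = stats.multivariate_normal.pdf(x, mean=mu1, cov=cov1) * prior1
    p0 = stats.multivariate_normal.pdf(x, mean=mu0, cov=cov0) * (1 - prior1)
    return p1 / (p0 + p1)
```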
Proceedings of SPIE, the International Society for Optical Engineering | 1999
Wendell R. Watkins; David H. Tofsted; V. Grayson CuQlock-Knopp; Jay B. Jordan; Mohan M. Trivedi
Navigation, especially in aviation, has been plagued since its inception with the hazards of poor visibility conditions. Vehicular ground movement is also hampered at night or in low visibility, even with night vision augmentation, because of the lack of contrast and depth perception. For landing aircraft in fog, the visible and near-infrared have been discounted because of their large backscatter coefficients, in favor of primarily radar, which penetrates water-laden atmospheres. Aircraft outfitted with an Instrument Landing System (ILS) can land safely on an aircraft carrier in fog. Landing at an airport with an ILS alone is not safe, however, because there is no way to detect small-scale obstacles that do not show up on radar but can cause a landing crash. We have developed and tested a technique to improve navigation through fog based on chopped active visible laser illumination and wide-baseline stereo (hyperstereo) viewing with real-time image correction of backscatter radiation and forward-scattering blur. The basis of the approach to developing this active hyperstereo vision system for landing aircraft in fog is outlined in the invention disclosure of the Army Research Laboratory (ARL) patent application ARL-97-72, filed Dec. 1997. Testing this concept required a matched pair of laser illuminators and cameras with synchronized choppers, a computer for near real-time acquisition and analysis of the hyperstereo imagery with ancillary stereo display goggles, a set of specular reflectors, and a fog generator/characterizer. The basic concept of active hyperstereo vision is to compare the imagery obtained from alternate wings of the aircraft while illuminating only from the opposite wing. This produces images with a backscatter radiation pattern that has an increasing gradient towards the side with the illumination source. Flipping the imagery from one wing left to right and comparing it with the opposite wing's imagery allows the backscattered radiation pattern to be subtracted from both sets of imagery. The use of specular reflectors along the sides of the runway allows the human stereo fusion process to fuse the forward-scatter-blurred hyperstereo imagery of the reflector array with backscatter eliminated, and allows the appropriate amount of inverse point spread function deblurring to be applied for optimum resolution of scene content (i.e., obstacles on the runway). Results of this testing are presented.
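The inverse point-spread-function step mentioned above is commonly realized with a regularized (Wiener-style) inverse filter. The sketch below shows that standard construction; it is not the paper's specific filter or parameters.

```python
import numpy as np

def wiener_deblur(img: np.ndarray, psf: np.ndarray,
                  snr: float = 100.0) -> np.ndarray:
    """Inverse-PSF deblurring via a Wiener filter: a regularized inverse
    that avoids amplifying noise where the PSF response is weak. Assumes
    the PSF origin is at the top-left corner (apply np.fft.ifftshift to
    a centered PSF first). An illustrative sketch only."""
    H = np.fft.fft2(psf, s=img.shape)               # PSF transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # regularized inverse
    return np.real(np.fft.ifft2(np.fft.fft2(img) * W))
```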
Proceedings of SPIE | 1992
Paul A. Billings; Troy F. Giles; Jay B. Jordan; Michael K. Giles; Gerald M. Flachs
A multisensor image analysis system that locates and recognizes realistic models of military objects placed on a terrain board has been demonstrated. Images are acquired using two overhead video sensors: a wide-field, low-resolution camera for cueing and a narrow-field, high-resolution camera for object segmentation and recognition. The red, green, and blue sensor information is fused and used in the digital image analysis. Small regions of interest located within the wide field-of-view scene by a high-speed digital cuer are automatically acquired and imaged by the high-resolution camera. A high-speed statistical segmenter produces a binary image of any military object found within a given region and sends it to a computer-controlled binary phase-only optical correlator for recognition. Rotation-, scale-, and aspect-invariant recognition is accomplished using a binary tree search of composite binary phase-only filters. The system can reliably recognize any one of ten different objects placed at any location and orientation on the terrain board within ten seconds.
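A digital emulation of binary phase-only correlation is straightforward and conveys why the optical correlator produces sharp recognition peaks. The fragment below is an illustrative sketch of the filtering idea, not the system's actual optical implementation.

```python
import numpy as np

def bpof_correlate(scene: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Correlate a binary input scene with a binary phase-only filter
    (BPOF): the reference spectrum is reduced to the sign of its real
    part, so the filter carries phase-like information only. Sharp
    correlation peaks mark locations matching the reference."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference, s=scene.shape)
    bpof = np.where(np.real(R) >= 0, 1.0, -1.0)  # binarized +/-1 filter
    corr = np.fft.ifft2(S * np.conj(bpof))
    return np.abs(corr)  # peak height/location indicate a match
```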