
Publication


Featured research published by Glenn D. Hines.


Visual Information Processing Conference | 2004

DSP implementation of the retinex image enhancement algorithm

Glenn D. Hines; Zia-ur Rahman; Daniel J. Jobson; Glenn A. Woodell

The Retinex is a general-purpose image enhancement algorithm used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video-frame-rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing involves a relatively large number of complex computations, so achieving real-time performance with current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating-point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and describe our plans for using alternative architectures.
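
The abstract does not spell out the Retinex computation itself; as a rough illustration of the single-scale form underlying it, here is a minimal Python sketch. The surround scale and the use of OpenCV's GaussianBlur are assumptions for the example, not details from the paper:

```python
import numpy as np
import cv2  # any Gaussian-blur routine would do; OpenCV is an assumption

def single_scale_retinex(image, sigma=80.0):
    """Minimal single-scale retinex sketch: the log of each pixel divided
    by a Gaussian-blurred surround. The sigma value is an illustrative
    guess, not a parameter from the paper."""
    img = image.astype(np.float64) + 1.0           # offset to avoid log(0)
    surround = cv2.GaussianBlur(img, (0, 0), sigma)
    r = np.log(img) - np.log(surround)             # local contrast in log domain
    # Stretch the result back into a displayable 8-bit range.
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)
    return (255.0 * r).astype(np.uint8)
```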


International Symposium on Photoelectronic Detection and Imaging 2011: Laser Sensing and Imaging; and Biological and Medical Applications of Photonics Sensing and Imaging | 2011

Lidar systems for precision navigation and safe landing on planetary bodies

Farzin Amzajerdian; Diego F. Pierrottet; Larry B. Petway; Glenn D. Hines; Vincent E. Roback

The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high-precision distance to the ground, and approach velocity can enable safe landing of robotic and manned vehicles with a high degree of precision. NASA is currently developing novel lidar sensors aimed at the needs of future planetary landing missions. These lidar sensors are a 3-dimensional imaging Flash Lidar, a Doppler Lidar, and a Laser Altimeter. The Flash Lidar is capable of generating elevation maps of the terrain to indicate hazardous features such as rocks, craters, and steep slopes. The elevation maps, which are collected during the approach phase of a landing vehicle from about 1 km above the ground, can be used to determine the most suitable safe landing site. The Doppler Lidar provides highly accurate ground-relative velocity and distance data, thus enabling precision navigation to the landing site. Our Doppler Lidar utilizes three laser beams pointed in different directions to measure line-of-sight velocities and ranges to the ground from altitudes of over 2 km. Starting at altitudes of about 20 km and throughout the landing trajectory, the Laser Altimeter can provide very accurate ground-relative altitude measurements that are used to improve the vehicle position knowledge obtained from the vehicle's navigation system. Between altitudes of approximately 15 km and 10 km, either the Laser Altimeter or the Flash Lidar can be used to generate contour maps of the terrain, identifying known surface features such as craters, to perform Terrain Relative Navigation and thus further reduce the vehicle's relative position error. This paper describes the operational capabilities of each lidar sensor and provides a status of their development.
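
The hazard-detection math is not given here; as one hedged illustration of how a Flash Lidar elevation map might flag steep slopes, the sketch below thresholds a finite-difference slope estimate. The grid spacing and the slope limit are invented for the example:

```python
import numpy as np

def slope_hazard_map(elevation, cell_size_m=0.1, max_slope_deg=15.0):
    """Flag cells of an elevation map whose local slope exceeds a safety
    threshold. Grid spacing and threshold are illustrative only."""
    dz_dy, dz_dx = np.gradient(elevation, cell_size_m)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg > max_slope_deg   # True where terrain is too steep to land

# Example: a synthetic 10 m x 10 m map with a V-shaped valley (~45 deg walls).
x = np.linspace(0.0, 10.0, 100)
elev = np.tile(np.abs(x - 5.0), (100, 1))
hazards = slope_hazard_map(elev, cell_size_m=0.1)
print(f"{hazards.mean():.0%} of cells flagged as hazardous")
```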


Visual Information Processing Conference | 2006

A comparison of visual statistics for the image enhancement of FORESITE aerial images with those of major image classes

Daniel J. Jobson; Zia-ur Rahman; Glenn A. Woodell; Glenn D. Hines

Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed-cloud visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with the visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images and for major classes of imaging: terrestrial (consumer) imaging, orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images and the degree of visibility improvement achieved by the enhancement process. The large aggregate data set exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall, the results support the idea that in most cases not involving extreme reductions in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external stand-alone metrics for establishing performance parameters.
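
The Visual Contrast Measure itself is not defined in this abstract; the sketch below is only a generic block-based contrast statistic in the same spirit, with the block size and visibility threshold invented for illustration:

```python
import numpy as np

def block_contrast_measure(gray, block=16, threshold=0.05):
    """Generic stand-in for a visual contrast metric: the fraction of
    non-overlapping blocks whose local std/mean contrast exceeds a
    threshold. Block size and threshold are illustrative guesses, not
    the paper's definition."""
    h, w = gray.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = gray[y:y + block, x:x + block].astype(np.float64)
            mean = tile.mean()
            if mean > 0:
                scores.append(tile.std() / mean)  # local RMS contrast
    scores = np.asarray(scores)
    return float((scores > threshold).mean())     # share of "visible" blocks
```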


Proceedings of SPIE | 2005

Image Enhancement, Image Quality, and Noise

Zia-ur Rahman; Daniel J. Jobson; Glenn A. Woodell; Glenn D. Hines

The Multiscale Retinex With Color Restoration (MSRCR) is a non-linear image enhancement algorithm that provides simultaneous dynamic range compression, color constancy, and rendition. The overall effect is to brighten areas of poor contrast/lightness, but not at the expense of saturating areas of good contrast/brightness. The downside is that, given the poor signal-to-noise ratio that most image acquisition devices exhibit in dark regions, noise can also be greatly enhanced, affecting overall image quality. In this paper, we discuss the impact of the MSRCR on the overall quality of an enhanced image as a function of the strength of shadows in the image and as a function of its root-mean-square (RMS) signal-to-noise ratio (SNR).
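
As a hedged sketch of what an MSRCR-style computation looks like, the following averages single-scale retinex outputs over several surround scales and applies a log-ratio color restoration term. The scale and gain constants are common choices from the retinex literature, not values taken from this paper:

```python
import numpy as np
import cv2  # assumed available for Gaussian blurring

def msrcr(image, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    """Multiscale retinex with color restoration, sketched for a 3-channel
    image. All constants are typical literature values, used here only
    for illustration."""
    img = image.astype(np.float64) + 1.0
    msr = np.zeros_like(img)
    for sigma in sigmas:
        surround = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += (np.log(img) - np.log(surround)) / len(sigmas)
    # Color restoration: weight each channel by its share of the pixel's
    # total intensity, in the log domain.
    color = beta * (np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True)))
    out = msr * color
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255.0 * out).astype(np.uint8)
```

Note that the normalization step above is a simple global stretch; in dark, low-SNR regions it will amplify noise along with signal, which is exactly the quality trade-off the paper examines.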


Visual Information Processing Conference | 2006

Advanced image processing of aerial imagery

Glenn A. Woodell; Daniel J. Jobson; Zia-ur Rahman; Glenn D. Hines

Aerial imagery of the Earth is an invaluable tool for the assessment of ground features, especially during times of disaster. Researchers at NASA's Langley Research Center have developed techniques that have proven useful for such imagery. Aerial imagery from various sources, including Langley's Boeing 757 Aries aircraft, has been studied extensively. This paper discusses these studies and demonstrates that better-than-observer imagery can be obtained even when visibility is severely compromised. A real-time, multi-spectral experimental system is described and numerous examples are shown.


Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense IV | 2005

Enhancement of imagery in poor visibility conditions

Glenn A. Woodell; Daniel J. Jobson; Zia-ur Rahman; Glenn D. Hines

Current still-image and video systems are typically of limited use in poor visibility conditions such as rain, fog, smoke, and haze. These conditions severely limit the range and effectiveness of imaging systems because of the severe reduction in contrast. The NASA Langley Research Center's Visual Information Processing Group has developed an image enhancement technology, based on the concept of a visual servo, that has direct applications to the problem of poor visibility conditions. This technology has been used in cases of severe image turbidity in air as well as underwater, with dramatic results. Use of this technology could greatly improve the performance of perimeter surveillance systems; military, security, and law enforcement operations; port security, both on land and below water; and air and sea rescue services, resulting in improved public safety.


Visual Information Processing Conference | 2002

Multisensor fusion and enhancement using the Retinex image enhancement algorithm

Zia-ur Rahman; Daniel J. Jobson; Glenn A. Woodell; Glenn D. Hines

A new approach to sensor fusion and enhancement is presented. The Retinex image enhancement algorithm is used to jointly enhance and fuse data from long-wave infrared, short-wave infrared, and visible-wavelength sensors. This joint optimization results in fused data that contains more information than any of the individual data streams. This is especially true in turbid weather conditions, where the long-wave infrared sensor would conventionally be the only source of usable information. However, the Retinex algorithm can be used to pull out details from the other data streams as well, resulting in greater overall information content. The fusion uses the multiscale nature of the algorithm to both enhance and weight the contributions of the different data streams, forming a single output data stream.
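
The exact weighting scheme is not given in the abstract; one plausible reading, sketched below, enhances each co-registered band with a multiscale retinex and then weights the bands per pixel by their enhanced signal energy. The scales and the energy-based weights are assumptions for illustration:

```python
import numpy as np
import cv2  # assumed available for Gaussian blurring

def retinex_band(band, sigma):
    """Single-scale retinex of one registered sensor band (log domain)."""
    b = band.astype(np.float64) + 1.0
    return np.log(b) - np.log(cv2.GaussianBlur(b, (0, 0), sigma))

def fuse_bands(bands, sigmas=(15, 80, 250)):
    """Hedged retinex-based fusion sketch: enhance each co-registered
    2-D band at several scales, then weight each band per pixel by its
    enhanced magnitude so the most informative sensor dominates. This is
    an interpretation of the abstract, not the paper's exact scheme."""
    enhanced = [np.mean([retinex_band(b, s) for s in sigmas], axis=0)
                for b in bands]
    weights = [np.abs(e) + 1e-6 for e in enhanced]   # crude information proxy
    total = np.sum(weights, axis=0)
    fused = np.sum([w * e for w, e in zip(weights, enhanced)], axis=0) / total
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
    return (255.0 * fused).astype(np.uint8)
```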


Proceedings of SPIE | 2011

Navigation Doppler lidar sensor for precision altitude and vector velocity measurements: flight test results

Diego F. Pierrottet; Farzin Amzajerdian; Larry B. Petway; Bruce W. Barnes; George E. Lockard; Glenn D. Hines

An all-fiber Navigation Doppler Lidar (NDL) system is under development at NASA Langley Research Center (LaRC) for precision descent and landing applications on planetary bodies. The sensor produces high-resolution line-of-sight range, altitude above ground, ground-relative attitude, and high-precision velocity vector measurements. Previous helicopter flight test results demonstrated the NDL measurement concepts, including measurement precision, accuracy, and operational range. This paper discusses the results obtained from a recent campaign to test the improved sensor hardware and various signal processing algorithms applicable to real-time processing. The NDL was mounted in an instrumentation pod aboard an Erickson Air-Crane helicopter and flown over various terrains. The sensor was one of several tested in this field test by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project.


Proceedings of SPIE | 2013

Doppler lidar sensor for precision navigation in GPS-deprived environment

Farzin Amzajerdian; Diego F. Pierrottet; Glenn D. Hines; Larry B. Petway; Bruce W. Barnes

Landing mission concepts that are being developed for exploration of solar system bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure safe, soft landing at the pre-designated sites. Data from the vehicle’s Inertial Measurement Unit will not be sufficient due to significant drift error after extended travel time in space. Therefore, an onboard sensor is required to provide the necessary data for landing in the GPS-deprived environment of space. For this reason, NASA Langley Research Center has been developing an advanced Doppler lidar sensor capable of providing accurate and reliable data suitable for operation in the highly constrained environment of space. The Doppler lidar transmits three laser beams in different directions toward the ground. The signal from each beam provides the platform velocity and range to the ground along the laser line-of-sight (LOS). The six LOS measurements are then combined in order to determine the three components of the vehicle velocity vector, and to accurately measure altitude and attitude angles relative to the local ground. These measurements are used by an autonomous Guidance, Navigation, and Control system to accurately navigate the vehicle from a few kilometers above the ground to the designated location and to execute a gentle touchdown. A prototype version of our lidar sensor has been completed for a closed-loop demonstration onboard a rocket-powered terrestrial free-flyer vehicle.
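
The abstract does describe the geometry: each beam measures the projection of the vehicle velocity onto its line of sight, so three non-coplanar beams give a solvable 3x3 linear system for the velocity vector. A minimal sketch, with illustrative beam cant angles (the 22.5-degree value is assumed, not from the paper):

```python
import numpy as np

def velocity_from_los(beam_dirs, los_velocities):
    """Recover the 3-component vehicle velocity from three line-of-sight
    Doppler measurements: each measured LOS speed is the dot product of
    the velocity vector with that beam's unit direction, so stacking the
    three projections gives a 3x3 linear system."""
    A = np.asarray(beam_dirs, dtype=np.float64)        # rows: unit beam vectors
    b = np.asarray(los_velocities, dtype=np.float64)   # measured LOS speeds
    return np.linalg.solve(A, b)                       # (vx, vy, vz)

# Example: three beams canted 22.5 degrees off nadir, 120 degrees apart.
t = np.radians(22.5)
beams = [(np.sin(t), 0.0, np.cos(t)),
         (-np.sin(t) / 2, np.sin(t) * np.sqrt(3) / 2, np.cos(t)),
         (-np.sin(t) / 2, -np.sin(t) * np.sqrt(3) / 2, np.cos(t))]
v_true = np.array([1.0, -0.5, 2.0])                    # m/s, synthetic truth
los = np.asarray(beams) @ v_true                       # simulated measurements
print(velocity_from_los(beams, los))                   # recovers v_true
```

The same three beams also return ranges to the ground, which is how the sensor additionally derives altitude and attitude angles relative to the local surface.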


Proceedings of SPIE, the International Society for Optical Engineering | 2005

Real-time Enhanced Vision System

Glenn D. Hines; Zia-ur Rahman; Daniel J. Jobson; Glenn A. Woodell; Steven D. Harrah

Flying in poor visibility conditions, such as rain, snow, fog, or haze, is inherently dangerous. However, these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real time for the pilot to use while flying. For image enhancement, we are using the LaRC-patented Retinex algorithm, since it performs exceptionally well for improving the low-contrast imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor, real-time version of the Retinex on several different digital signal processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion, and we discuss our current real-time Retinex implementations on DSPs.

Collaboration


Dive into Glenn D. Hines's collaboration.

Top Co-Authors

John M. Carson

California Institute of Technology
