
Publications


Featured research published by Richard M. Heinrichs.


Applied Optics | 2002

Three-dimensional imaging laser radar with a photon-counting avalanche photodiode array and microchip laser

Marius A. Albota; Richard M. Heinrichs; David G. Kocher; Daniel G. Fouche; Brian E. Player; Michael E. O'Brien; Brian F. Aull; John J. Zayhowski; James G. Mooney; Berton C. Willard; Robert R. Carlson

We have developed a three-dimensional imaging laser radar featuring 3-cm range resolution and single-photon sensitivity. This prototype direct-detection laser radar employs compact, all-solid-state technology for the laser and detector array. The source is a Nd:YAG microchip laser that is diode pumped, passively Q-switched, and frequency doubled. The detector is a gated, passively quenched, two-dimensional array of silicon avalanche photodiodes operating in Geiger mode. After describing the system in detail, we present a three-dimensional image, derive performance characteristics, and discuss our plans for future imaging three-dimensional laser radars.
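The direct-detection ranging principle behind the 3-cm resolution figure can be sketched in a few lines. `tof_to_range` is a hypothetical helper, and the 200-ps timing figure is illustrative, not taken from the paper:

```python
# Direct-detection ladar ranging: a single-photon arrival time maps to range
# via r = c * t / 2 (round trip). Numbers here are illustrative only.
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(t_seconds: float) -> float:
    """Range in meters for a measured round-trip time of flight."""
    return C * t_seconds / 2.0

# A 200-ps timing uncertainty corresponds to ~3 cm of range uncertainty:
print(tof_to_range(200e-12))  # ≈ 0.03 m
```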


Proceedings of SPIE | 2001

Three-dimensional laser radar with APD arrays

Richard M. Heinrichs; Brian F. Aull; Richard M. Marino; Daniel G. Fouche; Alexander K. Mcintosh; John J. Zayhowski; Timothy Stephens; Michael E. O'Brien; Marius A. Albota

MIT Lincoln Laboratory is actively developing laser and detector technologies that make it possible to build a 3D laser radar with several attractive features, including capture of an entire 3D image on a single laser pulse, tens of thousands of pixels, few-centimeter range resolution, and small size, weight, and power requirements. The laser technology is based on diode-pumped solid-state microchip lasers that are passively Q-switched. The detector technology is based on Lincoln-built arrays of avalanche photodiodes operating in the Geiger mode, with integrated timing circuitry for each pixel. The advantage of these technologies is that they offer the potential for small, compact, rugged, high-performance systems, which are critical for many applications.


Proceedings of SPIE, the International Society for Optical Engineering | 2005

High-resolution 3D imaging laser radar flight test experiments

Richard M. Marino; W. R. Davis; G. C. Rich; Joseph McLaughlin; E. I. Lee; Byron Stanley; J. W. Burnside; Gregory S. Rowe; Robert Hatch; T. E. Square; Luke J. Skelly; Michael E. O'Brien; Alexandru N. Vasile; Richard M. Heinrichs

Situation awareness and accurate Target Identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage nets and/or tree-canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution. The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3-D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3-D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns. The sensor operates day or night and produces high-resolution 3-D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photodiode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched, solid-state, frequency-doubled Nd:YAG laser transmitting short laser pulses (300 ps FWHM) at a 16-kHz pulse rate and at 532-nm wavelength. The single-photon detection efficiency has been measured to be > 20% using these 32x32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically > 10^6. The pulse out of the detector is used to stop a 500-MHz digital clock register integrated within the focal-plane array at each pixel. Using the detector in this binary response mode simplifies the signal processing by eliminating the need for analog-to-digital converters and non-linearity corrections. With appropriate optics, the 32x32 array of digital time values represents a 3-D spatial image frame of the scene. Successive image frames illuminated with the multi-kilohertz pulse repetition rate laser are accumulated into range histograms to provide 3-D volume and intensity information. In this article, we describe the Jigsaw program goals, our demonstration sensor system, the data collection campaigns, and show examples of 3-D imaging with foliage and camouflage penetration. Other applications for this 3-D imaging direct-detection ladar technology include robotic vision, navigation of autonomous vehicles, manufacturing quality control, industrial security, and topography.
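The binary-response histogramming described above can be sketched as follows. The pulse counts, detection probabilities, and bin count are invented for illustration, and a real Geiger-mode pixel has blocking and dark-count behavior this toy model ignores:

```python
import random

# Per pulse, a Geiger-mode pixel yields at most one digital time-of-flight
# count (binary response). Accumulating counts over many pulses builds a
# per-pixel range histogram whose peak estimates the surface return.
random.seed(0)
N_PULSES, N_BINS = 2000, 64
TRUE_BIN = 40                    # bin containing the real surface return
P_SIGNAL, P_NOISE = 0.15, 0.01   # per-pulse detection probabilities (assumed)

hist = [0] * N_BINS
for _ in range(N_PULSES):
    if random.random() < P_SIGNAL:            # photon returned from the target
        hist[TRUE_BIN] += 1
    elif random.random() < P_NOISE:           # background count, uniform in range
        hist[random.randrange(N_BINS)] += 1

estimated_bin = hist.index(max(hist))         # histogram peak -> range estimate
```

The signal bin accumulates across pulses while background counts spread uniformly over all bins, so the peak stands out even with single-photon-level returns per pulse.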


Laser Radar Technology and Applications XII | 2007

Jigsaw phase III: a miniaturized airborne 3-D imaging laser radar with photon-counting sensitivity for foliage penetration

Mohan Vaidyanathan; Steven G. Blask; Thomas Higgins; William Clifton; Daniel Davidsohn; Ryan Carson; Van Reynolds; Joanne Pfannenstiel; Richard Cannata; Richard M. Marino; John Drover; Robert Hatch; David Schue; Robert E. Freehart; Greg Rowe; James G. Mooney; Carl Hart; Byron Stanley; Joseph McLaughlin; Eui-In Lee; Jack Berenholtz; Brian F. Aull; John J. Zayhowski; Alex Vasile; Prem Ramaswami; Kevin Ingersoll; Thomas Amoruso; Imran Khan; William M. Davis; Richard M. Heinrichs

Jigsaw three-dimensional (3D) imaging laser radar is a compact, light-weight system for imaging highly obscured targets through dense foliage semi-autonomously from an unmanned aircraft. The Jigsaw system uses a gimbaled sensor operating in a spotlight mode to laser-illuminate a cued target, and autonomously captures and produces the 3D image of hidden targets under trees at high 3D voxel resolution. With our MIT Lincoln Laboratory team members, the sensor system has been integrated into a geo-referenced 12-inch gimbal, and used in airborne data collections from a UH-1 manned helicopter, which served as a surrogate platform for the purpose of data collection and system validation. In this paper, we discuss the results from the ground integration and testing of the system, and the results from UH-1 flight data collections. We also discuss the performance results of the system obtained using ladar calibration targets.


Proceedings of SPIE | 2010

Terrain classification of ladar data over Haitian urban environments using a lower envelope follower and adaptive gradient operator

Amy L. Neuenschwander; Melba M. Crawford; Lori A. Magruder; Christopher Weed; Richard Cannata; Dale G. Fried; Robert Knowlton; Richard M. Heinrichs

In response to the 2010 Haiti earthquake, the ALIRT ladar system was tasked with collecting surveys to support disaster relief efforts. Standard methodologies to classify the ladar data as ground, vegetation, or man-made features failed to produce an accurate representation of the underlying terrain surface. The majority of these methods rely primarily on gradient-based operations that often perform well for areas with low topographic relief, but fail in areas of high topographic relief or dense urban environments. An alternative approach based on an adaptive lower envelope follower (ALEF) with an adaptive gradient operation for accommodating local slope and roughness was investigated for recovering the ground surface from the ladar data. This technique was successful for classifying terrain in the urban and rural areas of Haiti over which the ALIRT data had been acquired.


Applied Imagery Pattern Recognition Workshop | 2006

Automatic Alignment of Color Imagery onto 3D Laser Radar Data

Alexandru N. Vasile; Frederick R. Waugh; Daniel Greisokh; Richard M. Heinrichs

We present an algorithm for the automatic fusion of city-sized, 2D color imagery to 3D laser radar imagery collected from distinct airborne platforms at different times. Our approach is to derive pseudo-intensity images from ladar imagery and to align these with color imagery using conventional 2D registration algorithms. To construct a pseudo-intensity image, the algorithm uses the color imagery's time of day and location to predict shadows in the 3D image, then determines ambient and sun lighting conditions by histogram matching the 3D-derived shadowed and non-shadowed regions to their 2D counterparts. A projection matrix is computed to bring the pseudo-image into 2D image coordinates, resulting in an initial alignment of the imagery to within 200 meters. Finally, the 2D intensity image and 3D-generated pseudo-intensity image are registered using a modified normalized correlation algorithm to solve for rotation, translation, scale, and lens distortion, resulting in a fused data set that is aligned to within 1 meter. Applications of the presented work include the areas of augmented reality and scene interpretation for persistent surveillance in heavily cluttered and occluded environments.
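The histogram-matching step used to estimate lighting conditions can be illustrated with a minimal 1D sketch. `histogram_match` is a hypothetical helper, not the authors' implementation, and real image histogram matching operates on pixel-value distributions rather than tiny lists:

```python
# Minimal rank-based histogram matching: remap the values of `source` so
# their empirical distribution matches `reference` -- the same basic
# operation used to transfer 2D-image lighting statistics onto the
# ladar-derived pseudo-intensity image.

def histogram_match(source, reference):
    """Map each source value to the reference value of equal rank (quantile)."""
    src_order = sorted(range(len(source)), key=lambda i: source[i])
    ref_sorted = sorted(reference)
    out = [0] * len(source)
    for rank, i in enumerate(src_order):
        # pick the reference value at the same relative rank
        j = min(rank * len(reference) // len(source), len(reference) - 1)
        out[i] = ref_sorted[j]
    return out

matched = histogram_match([5, 1, 9, 3], [10, 20, 30, 40])
# source ranks 1 < 3 < 5 < 9 map to 10, 20, 30, 40 in rank order
# -> matched == [30, 10, 40, 20]
```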


Proceedings of SPIE | 1996

Measurements of aircraft wake vortices at Memphis International Airport with a cw CO2 coherent laser radar

Richard M. Heinrichs; Timothy J. Dasey; Michael P. Matthews; Steven D. Campbell; Robert E. Freehart; Glenn H. Perras; Philippe Salamitou

A CW coherent laser radar using a 20-watt CO2 laser has been constructed and deployed for the measurement of wake-vortex turbulence. This effort is part of the NASA Terminal Area Productivity Program and has the goal of providing information to further the understanding of the motion and decay of wake vortices as influenced by the local atmospheric conditions. To meet this goal, vortex measurements are made with the lidar along with simultaneous measurements from a suite of meteorological sensors, which includes a 150-foot instrumented tower, a profiler/RASS, sodar, and balloon soundings. The information collected also includes airline flight data and beacon data. The operation of the lidar during two field deployments at Memphis International Airport is described, along with examples of vortex motion and decay measurements in various atmospheric conditions.


37th Aerospace Sciences Meeting and Exhibit | 1999

Vortex and meteorological measurements at Dallas/Ft. Worth airport

Rose Joseph; Timothy J. Dasey; Richard M. Heinrichs

As part of NASA’s Aircraft Vortex Spacing System (AVOSS), Lincoln Laboratory conducted meteorological and wake vortex data collections at Dallas/Ft. Worth (DFW) airport in 1997. A mobile continuous-wave coherent CO2 laser radar was utilized to detect and track vortices generated by landing aircraft. Associated meteorological data were acquired by an extensive array of weather sensors. The DFW deployment is described here along with a preliminary analysis of vortex data. Vortex measurements from a 1995 lidar deployment in Memphis are also included in the analysis.


International Symposium on Visual Computing | 2012

Advanced Coincidence Processing of 3D Laser Radar Data

Alexandru N. Vasile; Luke J. Skelly; Michael E. O’Brien; Dan G. Fouche; Richard M. Marino; Robert Knowlton; M. Jalal Khan; Richard M. Heinrichs

Data collected by 3D laser radar (lidar) systems, which utilize arrays of avalanche photodiode detectors operating in either linear or Geiger mode, may include a large number of false detector counts or noise from temporal and spatial clutter. We present an improved algorithm for noise removal and signal detection, called Multiple-Peak Spatial Coincidence Processing (MPSCP). Field data, collected using an airborne lidar sensor in support of the 2010 Haiti earthquake operations, were used to test the MPSCP algorithm against the current state of the art, Maximum A-posteriori Coincidence Processing (MAPCP). Qualitative and quantitative results are presented to determine how well each algorithm removes image noise while preserving signal and reconstructing the best estimate of the underlying 3D scene. The MPSCP algorithm is shown to have a 9x improvement in signal-to-noise ratio, a 2-3x improvement in angular and range resolution, a 21% improvement in ground detection, and a 5.9x improvement in computational efficiency compared to MAPCP.
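The general idea behind coincidence processing — isolated detections are likely noise, while spatially coincident ones are likely signal — can be sketched with a toy filter. This is a deliberate simplification, not the MPSCP or MAPCP algorithms from the paper, and the radius and neighbor threshold are invented:

```python
# Toy spatial-coincidence filter: keep a 3D detection only if enough other
# detections fall within a small neighborhood; isolated points are treated
# as noise. (Brute-force O(n^2); real systems use spatial indexing.)

def coincidence_filter(points, radius=1.0, min_neighbors=2):
    """Return the subset of 3D points with >= min_neighbors within radius."""
    kept = []
    r2 = radius * radius
    for i, (xi, yi, zi) in enumerate(points):
        n = 0
        for j, (xj, yj, zj) in enumerate(points):
            if i != j and (xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2 <= r2:
                n += 1
        if n >= min_neighbors:
            kept.append((xi, yi, zi))
    return kept

surface = [(x * 0.1, 0.0, 5.0) for x in range(30)]   # dense "ground" returns
noise = [(1.0, 7.0, 20.0), (-4.0, 2.0, 9.0)]         # isolated false counts
filtered = coincidence_filter(surface + noise)
# The isolated noise points are removed; the dense surface survives.
```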


International Symposium on Visual Computing | 2011

Efficient city-sized 3D reconstruction from ultra-high resolution aerial and ground video imagery

Alexandru N. Vasile; Luke J. Skelly; Karl S. Ni; Richard M. Heinrichs; Octavia I. Camps

This paper introduces an approach for geo-registered, dense 3D reconstruction of city-sized scenes using a combination of ultra-high-resolution aerial and ground video imagery. While 3D reconstructions from ground imagery provide high-detail street-level views of a city, they do not completely cover the entire city scene and might have distortions due to GPS drift. Such a reconstruction can be complemented by aerial imagery to capture missing scene surfaces as well as improve geo-registration. We present a computationally efficient method for 3D reconstruction of city-sized scenes using both aerial and ground video imagery to obtain a more complete and self-consistent geo-registered 3D city model. The reconstruction results for a 1 km × 1 km city area, covered with a 66-megapixel airborne system along with a 60-megapixel ground camera system, are presented and validated to geo-register to within 3 m of prior airborne-collected lidar data.

Collaboration


Dive into Richard M. Heinrichs's collaboration.

Top co-authors (all at the Massachusetts Institute of Technology):

Richard M. Marino
Brian F. Aull
John J. Zayhowski
Michael E. O'Brien
Daniel G. Fouche
David G. Kocher
Alexandru N. Vasile
James G. Mooney
Marius A. Albota
Robert Hatch