Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Richard M. Marino is active.

Publication


Featured research published by Richard M. Marino.


Laser Radar Technology and Applications VIII | 2003

A compact 3D imaging laser radar system using Geiger-mode APD arrays: system and measurements

Richard M. Marino; Timothy Stephens; Robert Hatch; Joseph McLaughlin; James G. Mooney; Michael E. O'Brien; Gregory S. Rowe; Joseph S. Adams; Luke J. Skelly; Robert Knowlton; Stephen E. Forman; W. Robert Davis

MIT Lincoln Laboratory continues the development of novel high-resolution 3D imaging laser radar technology and sensor systems. The sensor system described in detail here uses a passively Q-switched, solid-state, frequency-doubled Nd:YAG laser to transmit short laser pulses (~700 ps FWHM) at 532 nm wavelength and derives the range to each target surface element by measuring the time-of-flight for each pixel. The single-photoelectron detection efficiency has been measured to be > 20% using these silicon Geiger-mode APDs at room temperature. The pulse out of the detector is used to stop a > 500 MHz digital clock integrated within the focal-plane array. With appropriate optics, the 32x32 array of digital time values represents a 3D spatial image frame of the scene. Successive image frames from the multi-kilohertz pulse repetition rate laser pulses are accumulated into range histograms to provide 3D volume and intensity information. In this paper, we report on a prototype sensor system that has recently been developed using new 32x32 arrays of Geiger-mode APDs with 0.35 μm CMOS digital timing circuits at each pixel. Here we describe the sensor system development and present recent measurements of laboratory test data and field imagery.
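
As an illustration of the ranging arithmetic described above, the sketch below converts per-pixel clock counts into ranges and accumulates range histograms over successive frames. It is a toy model, not the instrument's processing: a single-stop counter is assumed, a count of 0 is treated as "no detection", and the detection and noise rates are invented for the example.

    import numpy as np

    C = 299_792_458.0            # speed of light, m/s
    F_CLK = 500e6                # assumed on-chip clock frequency, Hz
    RANGE_BIN = C / (2 * F_CLK)  # ~0.3 m of range per clock count

    def counts_to_range(counts):
        """Map per-pixel time-of-flight counts to range in meters."""
        return counts * RANGE_BIN

    def accumulate_histograms(frames, max_count=4096):
        """Accumulate per-pixel range histograms over successive 32x32 frames."""
        hist = np.zeros((32, 32, max_count), dtype=np.int32)
        for frame in frames:
            rows, cols = np.nonzero(frame > 0)          # 0 = no detection (toy convention)
            hist[rows, cols, frame[rows, cols]] += 1
        return hist

    # Synthetic example: a flat surface at ~30 m plus sparse random noise counts.
    rng = np.random.default_rng(0)
    true_count = int(30.0 / RANGE_BIN)
    frames = []
    for _ in range(100):
        f = np.zeros((32, 32), dtype=np.int32)
        hit = rng.random((32, 32)) < 0.2                # ~20% detection efficiency per pulse
        f[hit] = true_count
        noise = rng.random((32, 32)) < 0.02             # sparse background/dark counts
        f[noise] = rng.integers(1, 4096, size=(32, 32))[noise]
        frames.append(f)

    peak_bins = accumulate_histograms(frames).argmax(axis=2)
    print("median recovered range [m]:", np.median(counts_to_range(peak_bins)))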


Proceedings of SPIE | 2001

Three-dimensional laser radar with APD arrays

Richard M. Heinrichs; Brian F. Aull; Richard M. Marino; Daniel G. Fouche; Alexander K. Mcintosh; John J. Zayhowski; Timothy Stephens; Michael E. O'Brien; Marius A. Albota

MIT Lincoln Laboratory is actively developing laser and detector technologies that make it possible to build a 3D laser radar with several attractive features, including capture of an entire 3D image on a single laser pulse, tens of thousands of pixels, few-centimeter range resolution, and small size, weight, and power requirements. The laser technology is based on diode-pumped solid-state microchip lasers that are passively Q-switched. The detector technology is based on Lincoln-built arrays of avalanche photodiodes operating in the Geiger mode, with integrated timing circuitry for each pixel. The advantage of these technologies is that they offer the potential for small, compact, rugged, high-performance systems, which are critical for many applications.


Proceedings of SPIE, the International Society for Optical Engineering | 2005

High-resolution 3D imaging laser radar flight test experiments

Richard M. Marino; W. R. Davis; G. C. Rich; Joseph McLaughlin; E. I. Lee; Byron Stanley; J. W. Burnside; Gregory S. Rowe; Robert Hatch; T. E. Square; Luke J. Skelly; Michael E. O'Brien; Alexandru N. Vasile; Richard M. Heinrichs

Situation awareness and accurate Target Identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage nets and/or tree canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution. The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3-D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3-D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns. The sensor operates day or night and produces high-resolution 3-D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photodiode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched, solid-state, frequency-doubled Nd:YAG laser transmitting short laser pulses (300 ps FWHM) at a 16 kilohertz pulse rate and at 532 nm wavelength. The single-photon detection efficiency has been measured to be > 20% using these 32x32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically > 10^6. The pulse out of the detector is used to stop a 500 MHz digital clock register integrated within the focal-plane array at each pixel. Using the detector in this binary response mode simplifies the signal processing by eliminating the need for analog-to-digital converters and non-linearity corrections. With appropriate optics, the 32x32 array of digital time values represents a 3-D spatial image frame of the scene. Successive image frames illuminated with the multi-kilohertz pulse repetition rate laser are accumulated into range histograms to provide 3-D volume and intensity information. In this article, we describe the Jigsaw program goals, our demonstration sensor system, the data collection campaigns, and show examples of 3-D imaging with foliage and camouflage penetration. Other applications for this 3-D imaging direct-detection ladar technology include robotic vision, navigation of autonomous vehicles, manufacturing quality control, industrial security, and topography.
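
For context on the timing and detection numbers quoted above, a small back-of-the-envelope sketch follows. It assumes a single-stop 500 MHz counter and statistically independent pulses, which are simplifications for illustration rather than statements about the flight system.

    C = 299_792_458.0
    f_clk = 500e6                      # per-pixel clock quoted above
    range_quantum = C / (2 * f_clk)    # ~0.30 m of range per clock count
    print(f"range per clock count: {range_quantum:.2f} m")

    # With ~20% single-photon detection efficiency per pulse, the chance of at
    # least one detection after N independent pulses is 1 - (1 - p)^N.
    p = 0.20
    for n_pulses in (1, 10, 100):
        p_cum = 1 - (1 - p) ** n_pulses
        print(f"{n_pulses:4d} pulses -> cumulative detection probability {p_cum:.4f}")

    # At the 16 kHz pulse rate, 100 pulses take only ~6 ms of dwell time.
    print(f"dwell for 100 pulses at 16 kHz: {100 / 16e3 * 1e3:.1f} ms")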


Automatic target recognition. Conference | 2004

Pose-independent automatic target detection and recognition using 3D LADAR data

Alexandru N. Vasile; Richard M. Marino

We present a pose-independent Automatic Target Detection and Recognition (ATD/R) system using data from an airborne 3D imaging ladar sensor. The ATD/R system uses geometric shape and size signatures from target models to detect and recognize targets under heavy canopy and camouflage cover in extended terrain scenes. A method for data integration was developed to register multiple scene views to obtain a more complete 3-D surface signature of a target. Automatic target detection was performed using the general approach of “3-D cueing,” which determines and ranks regions of interest within a large-scale scene based on the likelihood that they contain the respective target. Each region of interest is further analyzed to accurately identify the target from among a library of 10 candidate target objects. The system performance was demonstrated on five extended terrain scenes with targets both out in the open and under heavy canopy cover, where the target occupied 1% to 5% of the scene by volume. Automatic target recognition was successfully demonstrated for 20 measured data scenes including ground vehicle targets both out in the open and under heavy canopy and/or camouflage cover, where the target occupied between 5% and 10% of the scene by volume. Correct target identification was also demonstrated for targets with multiple movable parts in arbitrary orientations. We achieved a high recognition rate (over 99%) along with a low false alarm rate (less than 0.01%).
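
The abstract does not spell out the cueing algorithm itself; the toy sketch below only illustrates the general idea of ranking regions of interest by how well their local point-cloud extent matches a target's size signature. The cell size, score, minimum point count, and target dimensions are invented for the example and are not from the paper.

    import numpy as np

    def cue_regions(points, target_dims=(7.0, 3.5, 2.5), cell=10.0, top_k=5):
        """Rank coarse scene cells by a crude size-signature score.

        points: (N, 3) array of ladar returns (x, y, z) in meters.
        target_dims: nominal target length/width/height (illustrative values).
        Returns the top_k cell origins with the highest scores.
        """
        mins = points[:, :2].min(axis=0)
        cells = np.floor((points[:, :2] - mins) / cell).astype(int)
        scores = {}
        for key in set(map(tuple, cells)):
            in_cell = points[np.all(cells == key, axis=1)]
            if len(in_cell) < 50:      # too sparse to hold a vehicle-sized object
                continue
            extent = in_cell.max(axis=0) - in_cell.min(axis=0)
            # Score is the inverse of the mismatch between observed extent and target dims.
            mismatch = np.abs(np.sort(extent) - np.sort(target_dims)).sum()
            scores[key] = 1.0 / (1.0 + mismatch)
        ranked = sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
        return [(mins + np.array(k) * cell, s) for k, s in ranked]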


Laser Radar Technology and Applications XII | 2007

Jigsaw phase III: a miniaturized airborne 3-D imaging laser radar with photon-counting sensitivity for foliage penetration

Mohan Vaidyanathan; Steven G. Blask; Thomas Higgins; William Clifton; Daniel Davidsohn; Ryan Carson; Van Reynolds; Joanne Pfannenstiel; Richard Cannata; Richard M. Marino; John Drover; Robert Hatch; David Schue; Robert E. Freehart; Greg Rowe; James G. Mooney; Carl Hart; Byron Stanley; Joseph McLaughlin; Eui-In Lee; Jack Berenholtz; Brian F. Aull; John J. Zayhowski; Alex Vasile; Prem Ramaswami; Kevin Ingersoll; Thomas Amoruso; Imran Khan; William M. Davis; Richard M. Heinrichs

Jigsaw three-dimensional (3D) imaging laser radar is a compact, lightweight system for imaging highly obscured targets through dense foliage semi-autonomously from an unmanned aircraft. The Jigsaw system uses a gimbaled sensor operating in a spotlight mode to laser-illuminate a cued target, and autonomously capture and produce the 3D image of hidden targets under trees at high 3D voxel resolution. With our MIT Lincoln Laboratory team members, the sensor system has been integrated into a geo-referenced 12-inch gimbal and used in airborne data collections from a UH-1 manned helicopter, which served as a surrogate platform for the purpose of data collection and system validation. In this paper, we discuss the results from the ground integration and testing of the system, and the results from UH-1 flight data collections. We also discuss the performance results of the system obtained using ladar calibration targets.


Applied Optics | 2011

Laser vibrometry from a moving ground vehicle.

Leaf A. Jiang; Marius A. Albota; Robert W. Haupt; Justin G. Chen; Richard M. Marino

We investigated the fundamental limits to the performance of a laser vibrometer that is mounted on a moving ground vehicle. The noise floor of a moving laser vibrometer consists of speckle noise, shot noise, and platform vibrations. We showed that speckle noise can be reduced by increasing the laser spot size and that the noise floor is dominated by shot noise at high frequencies (typically greater than a few kilohertz for our system). We built a five-channel, vehicle-mounted, 1.55 μm wavelength laser vibrometer to measure its noise floor at 10 m horizontal range while driving on dirt roads. The measured noise floor agreed with our theoretical estimates. We showed that, by subtracting the response of an accelerometer and an optical reference channel, we could reduce the excess noise (in units of micrometers per second per √Hz) from vehicle vibrations by a factor of up to 33, to obtain nearly speckle- and shot-noise-limited performance from 0.3 to 47 kHz.
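
As a rough illustration of the reference-subtraction idea, the sketch below fits the reference channels (e.g. accelerometer and optical reference) onto the vibrometer record by least squares and subtracts the fitted contribution. This is an assumption about the general approach, not the authors' exact processing, and the synthetic signal parameters are made up.

    import numpy as np

    def subtract_references(vib, refs):
        """Remove platform-coupled noise from a vibrometer record.

        vib:  (T,) vibrometer surface-velocity samples.
        refs: (T, K) reference channels; a least-squares coupling coefficient
              is fit per channel and the fitted contribution is subtracted.
        """
        coeffs, *_ = np.linalg.lstsq(refs, vib, rcond=None)
        return vib - refs @ coeffs

    # Synthetic check: a common platform vibration contaminates both channels.
    rng = np.random.default_rng(1)
    t = np.arange(20000) / 100e3                      # 100 kHz sampling (illustrative)
    platform = 5.0 * np.sin(2 * np.pi * 120 * t)      # strong low-frequency shake
    target = 0.05 * np.sin(2 * np.pi * 8e3 * t)       # weak target vibration
    vib = target + 0.9 * platform + 0.001 * rng.standard_normal(t.size)
    refs = np.column_stack([platform + 0.001 * rng.standard_normal(t.size)])
    clean = subtract_references(vib, refs)
    print("residual platform leakage (std):", np.std(clean - target))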


O-E/Fiber LASE '88 | 1989

Tomographic Image Reconstruction From Laser Radar Reflective Projections

Richard M. Marino; R. N. Capes; W. E. Keicher; Sanjeev R. Kulkarni; J. K. Parker; L. W. Swezey; J. R. Senning; M. F. Reiley; E. B. Craig

Image reconstruction from projections has been extensively developed in the medical field. For example, Computer Assisted Tomography (CAT) scanners measure the absorption of X-rays along ray-projections through a slice of a body. Applying reconstruction algorithms to these projection measurements leads to a two- or three-dimensional distribution of the mass density. We have applied similar methods to demonstrate image reconstruction from reflective projections obtained with laser radars. These baseband SAR techniques can be applied to any N-dimensional measurements, resulting in reconstructions of (N+1)-dimensional images. For example, two-dimensional range-Doppler or angle-angle images taken from several views can be reconstructed into three-dimensional images. As shown in this paper, two-dimensional images have been reconstructed from one-dimensional range-time-intensity (RTI) data, Doppler-time-intensity (DTI) data, or a combination of both types of laser radar measurements. Typical RTI measurements can be obtained with either coherent linear FM waveforms or with incoherent short-pulse waveforms. In the DTI case, a CW waveform is sufficient and leads to true narrow-band imaging. We have applied these methods to both computer-simulated data and field measurements, at both 10.6 μm and 0.53 μm wavelengths, using various test targets.
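
A minimal (unfiltered) backprojection sketch in the spirit of the reconstruction described above is shown below. It assumes each 1-D range profile is a parallel projection of the object's reflectivity at a known view angle; this is illustrative geometry, not the paper's processing chain.

    import numpy as np

    def backproject(profiles, angles_deg, n=128):
        """Accumulate 1-D range profiles into a 2-D image by simple backprojection.

        profiles:   (V, B) array, one range profile of B bins per view.
        angles_deg: (V,) view angles in degrees.
        n:          output image size (n x n pixels).
        """
        img = np.zeros((n, n))
        ys, xs = np.mgrid[0:n, 0:n]
        xc, yc = xs - n / 2, ys - n / 2
        n_bins = profiles.shape[1]
        for profile, ang in zip(profiles, np.deg2rad(angles_deg)):
            # Project each pixel onto the range axis for this view angle,
            # then smear the measured profile back along that direction.
            r = xc * np.cos(ang) + yc * np.sin(ang)
            bins = np.clip((r + n / 2) * n_bins / n, 0, n_bins - 1).astype(int)
            img += profile[bins]
        return img / len(profiles)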


Laser radar technology and applications. Conference | 2004

Obscuration measurements of tree canopy structure using a 3D imaging ladar system

Richard Cannata; William Clifton; Steven G. Blask; Richard M. Marino

Recently developed airborne imaging laser radar systems are capable of rapidly collecting accurate and precise spatial information for topographic characterization as well as surface imaging. However, the performance of airborne ladar (laser detection and ranging) collection systems often depends upon the density and distribution of tree canopy over the area of interest, which obscures the ground and objects close to the ground such as buildings or vehicles. Traditionally, estimates of canopy obscuration are made using ground-based methods, which are time-consuming and valid only for a small area and for specific collection geometries when collecting data from an airborne platform. Since ladar systems are capable of collecting a spatially and temporally dense set of returns in 3D space, the return reflections can be used to differentiate and monitor the density of ground and tree canopy returns in order to measure, in near real time, sensor performance for any arbitrary collection geometry or foliage density without relying on ground-based measurements. Additionally, an agile airborne ladar collection system could utilize prior estimates of the degree and spatial distribution of the tree canopy for a given area in order to determine optimal geometries for future collections. In this paper, we report on methods to rapidly quantify the magnitude and distribution of the spatial structure of obscuring canopy for a series of airborne high-resolution imaging ladar collections in a mature, mixed deciduous forest.
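
One simple way to express the kind of obscuration statistic described above is the fraction of returns per map cell that never reach near the ground. The sketch below is a toy calculation: the height threshold, cell size, and ground-height interface are my own illustrative choices, not the paper's method.

    import numpy as np

    def canopy_obscuration(points, ground_z, cell=5.0, veg_height=1.5):
        """Fraction of returns per map cell classified as canopy (above ground).

        points:   (N, 3) ladar returns (x, y, z) in meters.
        ground_z: callable giving local ground height for arrays of (x, y).
        Returns a dict mapping cell index -> canopy fraction (1.0 = fully obscured
        at this collection geometry).
        """
        z_above = points[:, 2] - ground_z(points[:, 0], points[:, 1])
        is_canopy = z_above > veg_height
        keys = np.floor(points[:, :2] / cell).astype(int)
        frac = {}
        for key in set(map(tuple, keys)):
            sel = np.all(keys == key, axis=1)
            frac[key] = float(is_canopy[sel].mean())
        return frac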


international symposium on visual computing | 2012

Advanced Coincidence Processing of 3D Laser Radar Data

Alexandru N. Vasile; Luke J. Skelly; Michael E. O’Brien; Dan G. Fouche; Richard M. Marino; Robert Knowlton; M. Jalal Khan; Richard M. Heinrichs

Data collected by 3D laser radar (lidar) systems, which utilize arrays of avalanche photodiode detectors operating in either linear or Geiger mode, may include a large number of false detector counts or noise from temporal and spatial clutter. We present an improved algorithm for noise removal and signal detection, called Multiple-Peak Spatial Coincidence Processing (MPSCP). Field data, collected using an airborne lidar sensor in support of the 2010 Haiti earthquake operations, were used to test the MPSCP algorithm against the current state of the art, Maximum A-posteriori Coincidence Processing (MAPCP). Qualitative and quantitative results are presented to determine how well each algorithm removes image noise while preserving signal and reconstructing the best estimate of the underlying 3D scene. The MPSCP algorithm is shown to have a 9x improvement in signal-to-noise ratio, a 2-3x improvement in angular and range resolution, a 21% improvement in ground detection, and a 5.9x improvement in computational efficiency compared to MAPCP.
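
The MPSCP algorithm itself is described in the paper; the sketch below only shows the basic coincidence idea that such processing builds on: keep a raw detection only if enough other detections fall within a small spatial neighborhood. The radius and neighbor threshold here are arbitrary illustrative values.

    import numpy as np
    from scipy.spatial import cKDTree

    def coincidence_filter(points, radius=0.5, min_neighbors=3):
        """Keep 3-D detections supported by nearby detections; drop isolated noise.

        points: (N, 3) array of raw Geiger-mode detections in meters.
        """
        tree = cKDTree(points)
        # Count neighbors within `radius`, excluding the point itself.
        counts = np.array(
            [len(tree.query_ball_point(p, radius)) - 1 for p in points]
        )
        return points[counts >= min_neighbors]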


Proceedings of SPIE | 2009

Photon-Counting Lidar for Aerosol Detection and 3-D Imaging

Richard M. Marino; Jonathan M. Richardson; Robert Garnier; David B. Ireland; Laura J. Bickmeier; Christina Siracusa; Patrick Quinn

Laser-based remote sensing is undergoing a remarkable advance due to novel technologies developed at MIT Lincoln Laboratory. We have conducted recent experiments that have demonstrated the utility of detecting and imaging low-density aerosol clouds. The Mobile Active Imaging LIDAR (MAIL) system uses a Lincoln Laboratory-developed microchip laser to transmit short pulses at a 14-16 kHz Pulse Repetition Frequency (PRF), and a Lincoln Laboratory-developed 32x32 Geiger-mode Avalanche Photodiode Detector (GmAPD) array for single-photon counting and ranging. The microchip laser is a frequency-doubled, passively Q-switched Nd:YAG laser providing an average transmitted power of less than 64 milliwatts. When the avalanche photodiodes are operated in Geiger mode, they are reverse-biased above the breakdown voltage for a time that corresponds to the effective range gate or range window of interest. The time-of-flight, and therefore range, is determined from the measured laser transmit time and the digital time value from each pixel. The optical intensity of the received pulse is not measured because the GmAPD is saturated by the electron avalanche. Instead, the reflectivity of the scene, or relative density of aerosols in this case, is determined from the temporally and/or spatially analyzed detection statistics.
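
As an illustration of how relative reflectivity (or aerosol density) can be recovered from detection statistics alone when the detector response is binary, a common Poisson model gives P(detect) = 1 - exp(-n̄), so n̄ = -ln(1 - P). This model is an assumption for the sketch, not necessarily the exact estimator used by the authors.

    import numpy as np

    def mean_photons_from_detections(hits, trials):
        """Estimate mean detected photoelectrons per pulse from binary statistics.

        hits:   number of detections observed in a given range bin / pixel.
        trials: number of laser pulses (range-gate opportunities).
        Assumes Poisson-distributed detected photoelectrons per pulse.
        """
        p = np.clip(np.asarray(hits, dtype=float) / trials, 0.0, 1.0 - 1e-9)
        return -np.log1p(-p)      # n_bar = -ln(1 - p)

    # Example: 300 detections in 1000 pulses -> ~0.357 photoelectrons per pulse.
    print(mean_photons_from_detections(300, 1000))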

Collaboration


Dive into Richard M. Marino's collaborations.

Top Co-Authors

Richard M. Heinrichs, Massachusetts Institute of Technology
Michael E. O'Brien, Massachusetts Institute of Technology
John J. Zayhowski, Massachusetts Institute of Technology
Brian F. Aull, Massachusetts Institute of Technology
Daniel G. Fouche, Massachusetts Institute of Technology
Robert Hatch, Massachusetts Institute of Technology
Alexandru N. Vasile, Massachusetts Institute of Technology
Gregory S. Rowe, Massachusetts Institute of Technology
James G. Mooney, Massachusetts Institute of Technology
Marius A. Albota, Massachusetts Institute of Technology