Publication


Featured research published by Lee V. Streeter.


International Conference on Computer Graphics and Interactive Techniques | 2013

Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles

Achuta Kadambi; Refael Whyte; Ayush Bhandari; Lee V. Streeter; Christopher Barsi; Adrian A. Dorrington; Ramesh Raskar

Time of flight cameras produce real-time range maps at a relatively low cost using continuous wave amplitude modulation and demodulation. However, they are geared to measure range (or phase) for a single reflected bounce of light and suffer from systematic errors due to multipath interference. We re-purpose the conventional time of flight device for a new goal: to recover per-pixel sparse time profiles expressed as a sequence of impulses. With this modification, we show that we can not only address multipath interference but also enable new applications such as recovering depth of near-transparent surfaces, looking through diffusers and creating time-profile movies of sweeping light. Our key idea is to formulate the forward amplitude modulated light propagation as a convolution with custom codes, record samples by introducing a simple sequence of electronic time delays, and perform sparse deconvolution to recover sequences of Diracs that correspond to multipath returns. Applications to computer vision include ranging of near-transparent objects and subsurface imaging through diffusers. Our low cost prototype may lead to new insights regarding forward and inverse problems in light transport.
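The sparse-deconvolution idea can be sketched numerically. The following toy example is illustrative only (the code, sizes, and the plain ISTA solver are assumptions, not the authors' implementation): a coded measurement is modelled as the circular convolution of a known code with a sparse train of impulses, and the impulses are recovered by l1-regularized least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                    # number of time bins
c = rng.choice([-1.0, 1.0], size=n)       # random +/-1 code (for conditioning)
A = np.stack([np.roll(c, j) for j in range(n)], axis=1)  # circulant convolution model

x_true = np.zeros(n)
x_true[10], x_true[25] = 1.0, 0.6         # two multipath returns (Diracs)
y = A @ x_true                            # noiseless coded measurement

# ISTA: gradient step on ||Ax - y||^2, then soft-thresholding for sparsity.
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
lam = 0.02
x = np.zeros(n)
for _ in range(2000):
    g = A.T @ (A @ x - y)
    x = x - g / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

top2 = set(np.argsort(np.abs(x))[-2:])
print(top2)  # indices of the two strongest recovered impulses
```

With a well-conditioned code and noiseless data, the two recovered impulse positions and amplitudes match the true multipath returns closely.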


Proceedings of SPIE | 2011

Separating true range measurements from multi-path and scattering interference in commercial range cameras

Adrian A. Dorrington; John Peter Godbaz; Michael J. Cree; Andrew D. Payne; Lee V. Streeter

Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only been demonstrated on custom built range cameras, and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to both the Mesa Imaging SR-4000 and Canesta Inc. XZ-422 Demonstrator unmodified off-the-shelf range cameras. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.


Time-of-Flight and Depth Imaging | 2013

Technical Foundation and Calibration Methods for Time-of-Flight Cameras

Damien Lefloch; Rahul Nair; Frank Lenzen; Henrik Schäfer; Lee V. Streeter; Michael J. Cree; Reinhard Koch; Andreas Kolb

Current Time-of-Flight approaches mainly employ a continuous wave intensity modulation approach. Phase reconstruction is performed using multiple phase images with different phase shifts, which is equivalent to sampling the inherent correlation function at different locations. This active imaging approach brings a very specific set of influences, on the signal processing side as well as on the optical side, all of which affect the resulting depth quality. Applying ToF data in real applications therefore requires tackling these effects with specific calibration approaches. This survey gives an overview of the current state of the art in ToF sensor calibration.
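The phase sampling described above is commonly implemented as four-phase ("four-bucket") demodulation, sampling the correlation function at shifts of 0, 90, 180 and 270 degrees. A minimal numerical sketch, with made-up values for illustration:

```python
import numpy as np

phi_true = 1.0        # phase encoding the object distance (radians)
A_true = 0.8          # modulation amplitude
B_true = 2.0          # background offset

# Four samples of the correlation function: I_k = B + A*cos(phi + k*pi/2)
I = [B_true + A_true * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

phi = np.arctan2(I[3] - I[1], I[0] - I[2])        # recovered phase
amp = 0.5 * np.hypot(I[3] - I[1], I[0] - I[2])    # recovered amplitude
offset = sum(I) / 4                               # recovered offset
print(phi, amp, offset)
```

The differences I[3] - I[1] and I[0] - I[2] cancel the offset, so phase and amplitude are recovered exactly in this noiseless sketch.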


Applied Optics | 2009

Optical full Hadamard matrix multiplexing and noise effects

Lee V. Streeter; G. R. Burling-Claridge; Michael J. Cree; Rainer Künnemeyer

Hadamard multiplexing provides a considerable SNR boost over additive random noise, but Poisson noise, such as photon noise, reduces the boost. We develop the theory for full H-matrix Hadamard transform imaging under additive and Poisson noise effects. We show that H-matrix encoding has, on average, no effect on the noise level due to Poisson noise sources while preferentially reducing additive noise. We use this result to explain the wavelength-dependent varying SNR boost in a Hadamard hyperspectral imager and argue that such a preferential boost is useful when the main noise source is indeterminate or varying.
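The additive-noise multiplex advantage can be checked by simulation. The following Monte-Carlo sketch (sizes and noise level are illustrative assumptions) measures n samples through a full +/-1 Hadamard matrix H, decodes with H/n, and compares the error against direct one-at-a-time measurement; under additive Gaussian noise the decoded error shrinks by sqrt(n), while for Poisson noise this advantage disappears on average, as the paper shows:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
H = np.array([[1.0]])
while H.shape[0] < n:                     # Sylvester construction of H
    H = np.block([[H, H], [H, -H]])

x = rng.uniform(1.0, 2.0, n)              # true signal
sigma = 0.1
trials = 20000

direct_err = rng.normal(0.0, sigma, (trials, n))            # noise on x directly
had_meas = x @ H.T + rng.normal(0.0, sigma, (trials, n))    # noisy Hx
had_err = had_meas @ H / n - x                              # decode, subtract truth

rmse_direct = np.sqrt(np.mean(direct_err ** 2))
rmse_had = np.sqrt(np.mean(had_err ** 2))
print(rmse_direct / rmse_had)             # close to sqrt(16) = 4
```

Decoding averages each noise term over n code elements, so the error variance drops from sigma^2 to sigma^2/n.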


Applied Optics | 2015

Application of lidar techniques to time-of-flight range imaging

Refael Whyte; Lee V. Streeter; Michael J. Cree; Adrian A. Dorrington

Amplitude-modulated continuous wave (AMCW) time-of-flight (ToF) range imaging cameras measure distance by illuminating the scene with amplitude-modulated light and measuring the phase difference between the transmitted and reflected modulation envelope. This method of optical range measurement suffers from errors caused by multiple propagation paths, motion, phase wrapping, and nonideal amplitude modulation. In this paper a ToF camera is modified to operate in modes analogous to continuous wave (CW) and stepped frequency continuous wave (SFCW) lidar. In CW operation the velocity of objects can be measured. CW measurement of velocity was linear with true velocity (R² = 0.9969). Qualitative analysis of a complex scene confirms that range measured by SFCW is resilient to errors caused by multiple propagation paths, phase wrapping, and nonideal amplitude modulation, which plague AMCW operation. In viewing a complicated scene through a translucent sheet, quantitative comparison of AMCW with SFCW demonstrated a reduction in the median error from -1.3 m to -0.06 m, with the interquartile range of error reduced from 4.0 m to 0.18 m.
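The AMCW phase-to-distance relation underlying the wrapping problem is standard: d = c * phi / (4 * pi * f_mod), with an unambiguous range of c / (2 * f_mod) beyond which the phase wraps. A minimal sketch (the 30 MHz modulation frequency and the helper name are illustrative assumptions):

```python
import math

C = 299_792_458.0          # speed of light, m/s
f_mod = 30e6               # 30 MHz modulation frequency (illustrative)

def phase_to_distance(phi: float) -> float:
    """Distance for a measured modulation-envelope phase shift (radians)."""
    return C * phi / (4 * math.pi * f_mod)

ambiguity = C / (2 * f_mod)          # ~5 m unambiguous range at 30 MHz
print(phase_to_distance(math.pi), ambiguity)
```

SFCW-style operation sidesteps this single-frequency ambiguity by instead estimating range from the slope of phase across many stepped modulation frequencies.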


Journal of Near Infrared Spectroscopy | 2007

Visible/Near Infrared Hyperspectral Imaging via Spatial Illumination Source Modulation

Lee V. Streeter; G.R. Burling-Claridge; Michael J. Cree; Rainer Künnemeyer

A spatially modulated light encoding system for hyperspectral imaging is presented. A digital micromirror array was used for the optical switching in an illumination system that otherwise resembled a traditional slide projector. The Hadamard encoding scheme was employed in a purely spatial, non-wavelength-dispersive manner. A single-point linear diode array spectrometer measured the spectral dimension. To demonstrate the principle, two simple samples were imaged.


Image and Vision Computing New Zealand | 2008

Comparison of Hadamard imaging and compressed sensing for low resolution hyperspectral imaging

Lee V. Streeter; G.R. Burling-Claridge; Michael J. Cree; Rainer Künnemeyer

Image multiplexing is the technique of using combination patterns to measure multiple pixels with one sensor. Hyperspectral imaging acquires images with a full spectrum at each pixel. Using a single-point spectrometer and light modulation, we perform multiplexed hyperspectral imaging. We compare two forms of multiplexing, namely Hadamard imaging and compressed sensing, at low resolution. We show that Hadamard imaging is the more accurate and precise method. The primary benefit of compressed sensing is that fewer acquisitions are generally necessary for accurate reconstruction. Reasonable reconstruction was achieved with compressed sensing: for example, at approximately three-fifths the number of measurements required for Hadamard imaging, the SNR of compressed sensing approached that of Hadamard imaging with about 15% reconstruction error.


Electronic Imaging | 2008

Reference Beam Method for Source Modulated Hadamard Multiplexing

Lee V. Streeter; G. Robert Burling-Claridge; Michael J. Cree; Rainer Künnemeyer

A hyperspectral imaging system is in development. The system uses spatially modulated Hadamard patterns to encode image information, with implicit stray and ambient light correction, and a reference beam to correct for source light changes over the spectral image capture period. In this study we test the efficacy of the corrections and the multiplex advantage for our system. The signal-to-noise ratio (SNR) was used to demonstrate the advantage of spatial multiplexing in the system and to observe the effect of the reference beam correction. The statistical implications of the data acquisition technique, illumination source drift and the correction of such drift, were derived. The reference beam correction was applied per spectrum before Hadamard decoding and, alternatively, after decoding to all spectra in the image. The reference beam method made no fundamental change to the SNR; we therefore conclude that light source drift is minimal and that other, possibly rectifiable, error sources are dominant. The multiplex advantage was demonstrated, ranging from a minimum SNR boost of 1.5 (600-975 nm) to a maximum of 11 (below 500 nm). Intermediate SNR boost was observed in the 975-1700 nm range. The large variation in SNR boost is also attributed to some other error source.


Optical Engineering | 2015

Resolving multiple propagation paths in time of flight range cameras using direct and global separation methods

Refael Whyte; Lee V. Streeter; Michael J. Cree; Adrian A. Dorrington

Time of flight (ToF) range cameras illuminate the scene with an amplitude-modulated continuous wave light source and measure the phase and amplitude of the returning modulation envelope. The phase change of the modulation envelope encodes the distance travelled. This technology suffers from measurement errors caused by multiple propagation paths from the light source to the receiving pixel. The multiple paths can be represented as the summation of a direct return, which is the return from the shortest path length, and a global return, which includes all other returns. We develop the use of a sinusoidal pattern from which a closed-form solution for the direct and global returns can be computed in nine frames, under the constraint that the global return is of spatially lower frequency than the illuminated pattern. In a demonstration on a scene constructed to have strong multipath interference, we find the direct return is not significantly different from the ground truth in 33/136 pixels tested, whereas for the full-field measurement it is significantly different for every pixel tested. The variance in the estimated direct phase and amplitude increases by a factor of eight compared with the standard time of flight range camera technique.


Optical Engineering | 2014

Coded exposure correction of transverse motion in full-field range imaging

Lee V. Streeter; Adrian A. Dorrington

Time-of-flight range imaging systems typically require several raw image frames to produce one range, or depth, image. The problem of motion blur in traditional imaging is compounded in time-of-flight imaging because motion between these raw frames leads to invalid data. The use of coded exposure and optical flow techniques together is investigated for correcting both motion blur within each frame and errors arising from changes between frames. Examples of the motion correction in real range measurements are also given, along with comparisons to reference data, showing a significant improvement over noncorrected output.

Collaboration

Top co-authors of Lee V. Streeter:

Ayush Bhandari (Massachusetts Institute of Technology)
Achuta Kadambi (Massachusetts Institute of Technology)
Christopher Barsi (Massachusetts Institute of Technology)
Ramesh Raskar (Massachusetts Institute of Technology)