Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andrea Colaço is active.

Publication


Featured research published by Andrea Colaço.


Science | 2014

First-Photon Imaging

Ahmed Kirmani; Dheera Venkatraman; Dongeek Shin; Andrea Colaço; Franco N. C. Wong; Jeffrey H. Shapiro; Vivek K Goyal

Computing an Image: Firing off a burst of laser pulses and detecting the back-reflected photons is a widely used method for constructing three-dimensional (3D) images of a scene. Kirmani et al. (p. 58, published online 29 November) describe an active imaging method in which pulsed laser light raster scans a scene and a single-photon detector is used to detect the first photon of the back-reflected laser light. Exploiting spatial correlations of photons scattered from different parts of the scene allows computation of a 3D image. Importantly, for biological applications, the technique allows the laser power to be reduced without sacrificing image quality.

A computational imaging method based on photon timing enables three-dimensional imaging under low light flux conditions. Imagers that use their own illumination can capture three-dimensional (3D) structure and reflectivity information. With photon-counting detectors, images can be acquired at extremely low photon fluxes. To suppress the Poisson noise inherent in low-flux operation, such imagers typically require hundreds of detected photons per pixel for accurate range and reflectivity determination. We introduce a low-flux imaging technique, called first-photon imaging, which is a computational imager that exploits spatial correlations found in real-world scenes and the physics of low-flux measurements. Our technique recovers 3D structure and reflectivity from the first detected photon at each pixel. We demonstrate simultaneous acquisition of sub–pulse duration range and 4-bit reflectivity information in the presence of high background noise. First-photon imaging may be of considerable value to both microscopy and remote sensing.
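The central recipe, forming a crude depth estimate from each pixel's single photon and then letting spatial correlations reject background hits, can be sketched in a few lines. This is a toy illustration only: the median filter below stands in for the paper's regularized optimization, and the scene, noise level, and 10 m target are invented for the example.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def raw_depth_from_first_photon(t_first):
    """Per-pixel maximum-likelihood depth from the arrival time of
    the first detected photon: depth = c * t / 2 (round trip)."""
    return C * t_first / 2.0

def median_regularize(depth, k=3):
    """Toy spatial regularization: replace each pixel by the median
    of its k-by-k neighborhood, so isolated background-photon hits
    are outvoted by their neighbors."""
    padded = np.pad(depth, k // 2, mode="edge")
    h, w = depth.shape
    out = np.empty_like(depth)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Synthetic scene: a flat wall 10 m away. A few pixels record a
# background photon that happens to arrive before the signal photon.
rng = np.random.default_rng(0)
t_signal = 2 * 10.0 / C                      # true round-trip time
t_obs = np.full((32, 32), t_signal)
bad = rng.choice(32 * 32, size=20, replace=False)
t_obs.flat[bad] = rng.uniform(0, t_signal, size=20)

d_raw = raw_depth_from_first_photon(t_obs)   # corrupted at 20 pixels
d_reg = median_regularize(d_raw)             # background hits rejected
```

The actual method solves a joint estimation problem rather than filtering, but the sketch shows why a single photon per pixel can suffice once neighboring pixels are allowed to vote.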


Optics Express | 2011

Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor

Ahmed Kirmani; Andrea Colaço; Franco N. C. Wong; Vivek K Goyal

Range acquisition systems such as light detection and ranging (LIDAR) and time-of-flight (TOF) cameras operate by measuring the time difference of arrival between a transmitted pulse and the scene reflection. We introduce the design of a range acquisition system for acquiring depth maps of piecewise-planar scenes with high spatial resolution using a single, omnidirectional, time-resolved photodetector and no scanning components. In our experiment, we reconstructed 64 × 64-pixel depth maps of scenes comprising two to four planar shapes using only 205 spatially-patterned, femtosecond illuminations of the scene. The reconstruction uses parametric signal modeling to recover a set of depths present in the scene. Then, a convex optimization that exploits sparsity of the Laplacian of the depth map of a typical scene determines correspondences between spatial positions and depths. In contrast with 2D laser scanning used in LIDAR systems and low-resolution 2D sensor arrays used in TOF cameras, our experiment demonstrates that it is possible to build a non-scanning range acquisition system with high spatial resolution using only a standard, low-cost photodetector and a spatial light modulator.
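The key structural assumption is that the Laplacian of a piecewise-planar depth map is sparse, which is what makes the convex program well posed. A small numerical check (synthetic two-plane scene with invented dimensions) makes this concrete: the Laplacian annihilates each planar region, so only the seam between planes survives.

```python
import numpy as np

def discrete_laplacian(d):
    """5-point discrete Laplacian, evaluated at interior pixels."""
    return (d[:-2, 1:-1] + d[2:, 1:-1] +
            d[1:-1, :-2] + d[1:-1, 2:] - 4 * d[1:-1, 1:-1])

# Piecewise-planar scene: two tilted planes that meet at column 32.
X = np.tile(np.arange(64, dtype=float), (64, 1))
depth = np.where(X < 32, 1.0 + 0.01 * X, 2.0 - 0.005 * X)

lap = discrete_laplacian(depth)
lap_support = np.mean(np.abs(lap) > 1e-9)      # fraction of nonzeros
depth_support = np.mean(np.abs(depth) > 1e-9)  # depth itself: dense
```

The depth map is fully dense, but its Laplacian is nonzero only in the two pixel columns whose stencils straddle the seam, about 3% of entries, which is the sparsity the convex optimization exploits.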


User Interface Software and Technology | 2013

Mime: compact, low power 3D gesture sensing for interaction with head mounted displays

Andrea Colaço; Ahmed Kirmani; Hye Soo Yang; Nan-Wei Gong; Chris Schmandt; Vivek K Goyal

We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction. Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.
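Mime's three-pixel geometry suggests how a single scatterer can be localized from three range measurements. The sketch below is plain trilateration with an invented sensor layout and hand position; the actual Mime pipeline estimates those ranges from modulated TOF waveforms rather than being handed them directly.

```python
import numpy as np

# Hypothetical layout: three TOF pixels in the device plane z = 0.
sensors = np.array([[0.00, 0.00, 0.0],
                    [0.05, 0.00, 0.0],
                    [0.00, 0.05, 0.0]])

def locate_hand(ranges, sensors):
    """Trilaterate a single scatterer (the hand) from three ranges;
    return the solution in front of the device (z > 0)."""
    p1, p2, p3 = sensors
    r1, r2, r3 = ranges
    # Subtract the first sphere equation from the other two to get a
    # linear 2x2 system in (x, y); the z terms cancel because all
    # sensors lie in the z = 0 plane.
    A = 2 * np.array([p2[:2] - p1[:2], p3[:2] - p1[:2]])
    b = np.array([
        r1**2 - r2**2 + p2[:2] @ p2[:2] - p1[:2] @ p1[:2],
        r1**2 - r3**2 + p3[:2] @ p3[:2] - p1[:2] @ p1[:2],
    ])
    xy = np.linalg.solve(A, b)
    z = np.sqrt(max(r1**2 - np.sum((xy - p1[:2]) ** 2), 0.0))
    return np.array([xy[0], xy[1], z])

# Simulate: hand 30 cm in front of the display, slightly off-axis.
hand = np.array([0.02, -0.01, 0.30])
ranges = np.linalg.norm(sensors - hand, axis=1)
estimate = locate_hand(ranges, sensors)   # recovers `hand`
```

With noiseless ranges the linearized system recovers the hand position exactly; the sensor's small baseline (5 cm here) is what makes the real estimation problem sensitive to range noise.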


Computer Vision and Pattern Recognition | 2012

Compressive depth map acquisition using a single photon-counting detector: Parametric signal processing meets sparsity

Andrea Colaço; Ahmed Kirmani; Gregory A. Howland; John C. Howell; Vivek K Goyal

Active range acquisition systems such as light detection and ranging (LIDAR) and time-of-flight (TOF) cameras achieve high depth resolution but suffer from poor spatial resolution. In this paper we introduce a new range acquisition architecture that does not rely on scene raster scanning as in LIDAR or on a two-dimensional array of sensors as used in TOF cameras. Instead, we achieve spatial resolution through patterned sensing of the scene using a digital micromirror device (DMD) array. Our depth map reconstruction uses parametric signal modeling to recover the set of distinct depth ranges present in the scene. Then, using a convex program that exploits the sparsity of the Laplacian of the depth map, we recover the spatial content at the estimated depth ranges. In our experiments we acquired 64×64-pixel depth maps of fronto-parallel scenes at ranges up to 2.1 m using a pulsed laser, a DMD array and a single photon-counting detector. We also demonstrated imaging in the presence of unknown partially-transmissive occluders. The prototype and results provide promising directions for non-scanning, low-complexity range acquisition devices for various computer vision applications.


International Conference on Acoustics, Speech, and Signal Processing | 2012

CoDAC: A compressive depth acquisition camera framework

Ahmed Kirmani; Andrea Colaço; Franco N. C. Wong; Vivek K Goyal

Light detection and ranging (LIDAR) systems use time of flight (TOF) in combination with raster scanning of the scene to form depth maps, and TOF cameras instead make TOF measurements in parallel by using an array of sensors. Here we present a framework for depth map acquisition using neither raster scanning by the illumination source nor an array of sensors. Our architecture uses a spatial light modulator (SLM) to spatially pattern a temporally-modulated light source. Then, measurements from a single omnidirectional sensor provide adequate information for depth map estimation at a resolution equal to that of the SLM. Proof-of-concept experiments have verified the validity of our modeling and algorithms.
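The parametric step can be illustrated with a toy version of the problem: the single sensor sees a time profile that is a sum of pulses, one per distinct depth, and recovering the depths amounts to estimating the pulse delays. The peak picker below is a crude stand-in for the framework's parametric deconvolution, and the pulse width and depths are invented.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def sensor_response(depths, amps, t, pulse_sigma=1e-9):
    """Time profile at a single omnidirectional sensor: one Gaussian
    pulse per distinct scene depth, delayed by the round trip 2d/c."""
    y = np.zeros_like(t)
    for d, a in zip(depths, amps):
        y += a * np.exp(-((t - 2 * d / C) ** 2) / (2 * pulse_sigma ** 2))
    return y

def recover_depths(y, t, threshold=0.5):
    """Crude parametric recovery: local maxima above a threshold are
    taken as the distinct depths present in the scene."""
    peaks = [i for i in range(1, len(y) - 1)
             if y[i - 1] < y[i] > y[i + 1] and y[i] > threshold]
    return [C * t[i] / 2 for i in peaks]

t = np.linspace(0, 60e-9, 6001)     # 10 ps sampling grid
true_depths = [2.0, 4.5]            # two planar facets, metres
y = sensor_response(true_depths, [1.0, 0.8], t)
est = recover_depths(y, t)          # approximately [2.0, 4.5]
```

Once the few depths present are known, the remaining task, assigning each SLM pattern's energy to a depth, becomes the linear inverse problem the convex program solves.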


International Conference on Image Processing | 2013

Phase unwrapping and denoising for time-of-flight imaging using generalized approximate message passing

Jonathan Mei; Ahmed Kirmani; Andrea Colaço; Vivek K Goyal

We present a new method for simultaneously denoising and unwrapping phase in multi-frequency homodyne time-of-flight ranging for the formation of accurate depth maps despite low SNR of raw measurements. This is achieved with a new generalized approximate message passing (GAMP) algorithm for minimum mean-squared error estimation of the phase. A detailed, physically-accurate acquisition model is central in achieving high accuracy, and the use of the GAMP methodology allows low computational complexity despite dense dependencies and the nonlinearity and non-Gaussianity of the acquisition model. Numerical simulations demonstrate that our integrated approach performs better than separate unwrapping followed by denoising. This performance translates to lowering the optical power consumption of time-of-flight cameras for a fixed acquisition quality.
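The phase-wrapping problem itself is easy to state: a homodyne TOF camera observes phase modulo 2π at each modulation frequency, so depth is ambiguous per frequency but usually unique across frequencies. The brute-force search below (invented frequencies, noiseless phases) recovers a depth beyond either single-frequency unambiguous range; the paper's contribution is doing this jointly with denoising under realistic noise, which this sketch does not attempt.

```python
import numpy as np

C = 3e8                 # speed of light, m/s
freqs = [20e6, 25e6]    # assumed modulation frequencies, Hz

def wrapped_phase(depth, f):
    """A homodyne TOF camera observes phase only modulo 2*pi."""
    return (4 * np.pi * f * depth / C) % (2 * np.pi)

def unwrap_two_freq(phis, freqs, max_depth=15.0):
    """Try every pair of integer wrap counts; keep the pair of
    candidate depths that agree best."""
    best, best_err = None, np.inf
    for k0 in range(30):
        d0 = C * (phis[0] + 2 * np.pi * k0) / (4 * np.pi * freqs[0])
        if d0 > max_depth:
            break
        for k1 in range(30):
            d1 = C * (phis[1] + 2 * np.pi * k1) / (4 * np.pi * freqs[1])
            if d1 > max_depth:
                break
            if abs(d0 - d1) < best_err:
                best, best_err = (d0 + d1) / 2, abs(d0 - d1)
    return best

true_depth = 9.3   # beyond the 7.5 m / 6 m single-frequency ranges
phis = [wrapped_phase(true_depth, f) for f in freqs]
depth_est = unwrap_two_freq(phis, freqs)   # recovers 9.3 m
```

With noisy phases, picking the best-agreeing wrap counts per pixel in isolation fails frequently at low SNR, which is why the paper couples neighboring pixels through GAMP instead.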


User Interface Software and Technology | 2013

Sensor design and interaction techniques for gestural input to smart glasses and mobile devices

Andrea Colaço

Touchscreen interfaces for small display devices have several limitations: the act of touching the screen occludes the display, interface elements like keyboards consume precious display real estate, and even simple tasks like document navigation - which the user performs effortlessly using a mouse and keyboard - require repeated actions like pinch-and-zoom with touch input. More recently, smart glasses with limited or no touch input are starting to emerge commercially. However, the primary input to these systems has been voice. In this paper, we explore the space around the device as a means of touchless gestural input to devices with small or no displays. Capturing gestural input in the surrounding volume requires sensing the human hand. To achieve gestural input we have built Mime [3] -- a compact, low-power 3D sensor for short-range gestural control of small display devices. Our sensor is based on a novel signal processing pipeline and is built using standard off-the-shelf components. Using Mime we demonstrated a variety of application scenarios including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions. In my thesis, I will continue to extend sensor capabilities to support new interaction styles.


IEEE Global Conference on Signal and Information Processing | 2013

Parametric Poisson process imaging

Dongeek Shin; Ahmed Kirmani; Andrea Colaço; Vivek K Goyal

In conventional 3D imaging, a large number of detected photons is required at each pixel to mitigate the effect of signal-dependent Poisson or shot noise. Parametric Poisson process imaging (PPPI) is a new framework that enables scene depth acquisition with very few detected photons despite significant contribution from background light. Our proposed computational imager is based on accurate physical modeling of the photon detection process using time-inhomogeneous Poisson processes combined with regularization that promotes piecewise smoothness. Simulations demonstrate accurate imaging with only 1 detected photon per pixel.
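The measurement model here is a time-inhomogeneous Poisson process whose rate is a constant background level plus a pulse echoing from the scene. A standard way to simulate detections from such a process is Lewis-Shedler thinning; the sketch below uses invented rate numbers and is only the forward model, not the paper's estimator.

```python
import numpy as np

def sample_detections(rate_fn, t_max, rate_max, rng):
    """Sample detection times from a time-inhomogeneous Poisson
    process by thinning: propose arrivals at the constant rate
    rate_max, accept each with probability rate(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            return np.array(times)
        if rng.random() < rate_fn(t) / rate_max:
            times.append(t)

# Invented rate: constant background plus a Gaussian signal pulse
# echoing from a target whose round-trip time is 20 ns.
t0, sigma = 20e-9, 1e-9
bg, peak = 1e6, 5e9
rate = lambda t: bg + peak * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(1)
times = sample_detections(rate, 40e-9, bg + peak, rng)
# detections cluster around the 20 ns round-trip time
```

Inverting this model, deciding from one or two such timestamps per pixel whether the detection was signal or background and at what depth, is exactly where the piecewise-smoothness regularization earns its keep.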


Conference on Lasers and Electro-Optics | 2013

High photon efficiency computational range imaging using spatio-temporal statistical regularization

Ahmed Kirmani; Dheera Venkatraman; Andrea Colaço; Franco N. C. Wong; Vivek K Goyal

We demonstrate 1 photon-per-pixel photon efficiency and sub-pulse-width range resolution in megapixel laser range imaging by using a joint spatio-temporal statistical processing framework and by exploiting transform-domain sparsity.


Proceedings of SPIE | 2013

Spatio-temporal regularization for range imaging with high photon efficiency

Ahmed Kirmani; Andrea Colaço; Dongeek Shin; Vivek K Goyal

Conventional depth imagers using time-of-flight methods collect hundreds to thousands of detected photons per pixel to form high-quality depth images of a scene. Through spatio-temporal regularization achieved with maximum a posteriori probability estimation under a scene prior and an inhomogeneous Poisson process likelihood function, we form depth images with dramatically higher photon efficiency even as low as one detected photon per pixel. Simulations demonstrate the combination of high accuracy and high photon efficiency of our method, compared to the traditional maximum likelihood estimate of the depth image and other popular denoising algorithms.

Collaboration


Dive into Andrea Colaço's collaborations.

Top Co-Authors

Ahmed Kirmani (Massachusetts Institute of Technology)
Vivek K Goyal (Massachusetts Institute of Technology)
Franco N. C. Wong (Massachusetts Institute of Technology)
Chris Schmandt (Massachusetts Institute of Technology)
Dongeek Shin (Massachusetts Institute of Technology)
Dheera Venkatraman (Massachusetts Institute of Technology)
Hye Soo Yang (Massachusetts Institute of Technology)
Ig-Jae Kim (Korea Institute of Science and Technology)
Jeffrey H. Shapiro (Massachusetts Institute of Technology)