
Publication


Featured research published by Ahmed Kirmani.


Science | 2014

First-Photon Imaging

Ahmed Kirmani; Dheera Venkatraman; Dongeek Shin; Andrea Colaço; Franco N. C. Wong; Jeffrey H. Shapiro; Vivek K Goyal

Computing an Image: Firing off a burst of laser pulses and detecting the back-reflected photons is a widely used method for constructing three-dimensional (3D) images of a scene. Kirmani et al. (p. 58, published online 29 November) describe an active imaging method in which pulsed laser light raster scans a scene and a single-photon detector is used to detect the first photon of the back-reflected laser light. Exploiting spatial correlations of photons scattered from different parts of the scene allows computation of a 3D image. Importantly, for biological applications, the technique allows the laser power to be reduced without sacrificing image quality.

A computational imaging method based on photon timing enables three-dimensional imaging under low-light-flux conditions.

Imagers that use their own illumination can capture three-dimensional (3D) structure and reflectivity information. With photon-counting detectors, images can be acquired at extremely low photon fluxes. To suppress the Poisson noise inherent in low-flux operation, such imagers typically require hundreds of detected photons per pixel for accurate range and reflectivity determination. We introduce a low-flux imaging technique, called first-photon imaging, which is a computational imager that exploits spatial correlations found in real-world scenes and the physics of low-flux measurements. Our technique recovers 3D structure and reflectivity from the first detected photon at each pixel. We demonstrate simultaneous acquisition of sub–pulse duration range and 4-bit reflectivity information in the presence of high background noise. First-photon imaging may be of considerable value to both microscopy and remote sensing.
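The single-pixel core of first-photon ranging can be sketched numerically. The snippet below is an illustrative sketch only: the per-pulse detection probability and jitter model are assumptions, and the paper's key step of regularizing the noisy pixelwise estimates with spatial correlations across the scene is omitted.

```python
import random

C = 3e8  # speed of light, m/s

def first_photon_depth(true_depth_m, detect_prob, rng, jitter_s=0.0):
    """Fire pulses at one pixel until the first photon is detected,
    then convert that photon's time of flight into a depth estimate.
    The number of pulses needed is geometrically distributed."""
    n_pulses = 1
    while rng.random() > detect_prob:  # this pulse produced no detection
        n_pulses += 1
    tof = 2.0 * true_depth_m / C + rng.gauss(0.0, jitter_s)
    return C * tof / 2.0, n_pulses

rng = random.Random(0)
# Hypothetical pixel: target at 10 m, 5% detection probability per pulse.
depth_est, n = first_photon_depth(10.0, 0.05, rng)
```

The count `n` also carries reflectivity information (a darker pixel needs more pulses before its first detection), which is how the paper recovers reflectivity alongside range.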


Optics Express | 2011

Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor

Ahmed Kirmani; Andrea Colaço; Franco N. C. Wong; Vivek K Goyal

Range acquisition systems such as light detection and ranging (LIDAR) and time-of-flight (TOF) cameras operate by measuring the time difference of arrival between a transmitted pulse and the scene reflection. We introduce the design of a range acquisition system for acquiring depth maps of piecewise-planar scenes with high spatial resolution using a single, omnidirectional, time-resolved photodetector and no scanning components. In our experiment, we reconstructed 64 × 64-pixel depth maps of scenes comprising two to four planar shapes using only 205 spatially-patterned, femtosecond illuminations of the scene. The reconstruction uses parametric signal modeling to recover a set of depths present in the scene. Then, a convex optimization that exploits sparsity of the Laplacian of the depth map of a typical scene determines correspondences between spatial positions and depths. In contrast with 2D laser scanning used in LIDAR systems and low-resolution 2D sensor arrays used in TOF cameras, our experiment demonstrates that it is possible to build a non-scanning range acquisition system with high spatial resolution using only a standard, low-cost photodetector and a spatial light modulator.
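The convex-optimization step relies on the depth map's Laplacian being sparse: for a piecewise-planar scene the Laplacian vanishes everywhere except along depth discontinuities. A minimal numerical check of that property (illustrative only; this is not the paper's solver):

```python
import numpy as np

# A 64x64 piecewise-planar depth map: two fronto-parallel planes.
depth = np.empty((64, 64))
depth[:, :32] = 1.0   # near plane at 1.0 m
depth[:, 32:] = 2.5   # far plane at 2.5 m

# Discrete Laplacian on the interior pixels (4-neighbour stencil).
lap = (depth[:-2, 1:-1] + depth[2:, 1:-1] +
       depth[1:-1, :-2] + depth[1:-1, 2:] -
       4.0 * depth[1:-1, 1:-1])

# Only pixels adjacent to the depth edge are nonzero, so the Laplacian
# is sparse -- exactly the structure the convex program exploits.
sparsity = np.count_nonzero(lap) / lap.size
```

For this two-plane map only the two columns flanking the edge are nonzero, so well under 5% of Laplacian entries survive; minimizing an L1 penalty on the Laplacian therefore favors such piecewise-planar reconstructions.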


International Conference on Computer Vision | 2009

Looking around the corner using transient imaging

Ahmed Kirmani; Tyler Hutchison; James Davis; Ramesh Raskar

We show that multi-path analysis using images from a time-of-flight (ToF) camera provides a tantalizing opportunity to infer the 3D geometry of not only the visible but also the hidden parts of a scene. We provide a novel framework for reconstructing scene geometry from a single viewpoint using a camera that captures a 3D time-image I(x, y, t) for each pixel. We propose a framework that uses the time-image and transient reasoning to expose scene properties that may be beyond the reach of traditional computer vision. We corroborate our theory with free-space hardware experiments using a femtosecond laser and an ultrafast photodetector array. The ability to compute the geometry of hidden elements, unobservable by both the camera and illumination source, will create a range of new computer vision opportunities.


User Interface Software and Technology | 2013

Mime: compact, low power 3D gesture sensing for interaction with head mounted displays

Andrea Colaço; Ahmed Kirmani; Hye Soo Yang; Nan-Wei Gong; Chris Schmandt; Vivek K Goyal

We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction. Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.
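One way three range measurements can pin down a hand's 3D position is classical trilateration. The sketch below is an assumption-laden illustration, not Mime's published pipeline (which estimates ranges from a three-pixel ToF module and fuses them with RGB data): the sensor layout and hand position are hypothetical.

```python
import math

def trilaterate(d, j, r1, r2, r3):
    """Locate a point from its distances to three sensors placed at
    P1=(0,0,0), P2=(d,0,0), P3=(0,j,0); returns the solution with z >= 0.
    Standard closed-form trilateration for this axis-aligned layout."""
    x = (r1**2 - r2**2 + d**2) / (2.0 * d)
    y = (r1**2 - r3**2 + j**2) / (2.0 * j)
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))  # guard against round-off
    return x, y, z

# Hypothetical layout: sensors 5 cm apart, hand 30 cm in front of the HMD.
target = (0.02, 0.01, 0.30)
r1 = math.dist(target, (0.0, 0.0, 0.0))
r2 = math.dist(target, (0.05, 0.0, 0.0))
r3 = math.dist(target, (0.0, 0.05, 0.0))
est = trilaterate(0.05, 0.05, r1, r2, r3)
```

With noiseless ranges the closed form recovers the target exactly; in practice the ranges are noisy and tracking would smooth the estimates over time.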


International Conference on Multimedia and Expo | 2013

SPUMIC: Simultaneous phase unwrapping and multipath interference cancellation in time-of-flight cameras using spectral methods

Ahmed Kirmani; Arrigo Benedetti; Philip A. Chou

We propose a framework for simultaneous phase unwrapping and multipath interference cancellation (SPUMIC) in homodyne time-of-flight (ToF) cameras. Our multi-frequency acquisition framework is based on parametric modeling of the multipath interference phenomena. We use robust spectral estimation methods with low computational complexity to detect and estimate multipath parameters. Using simulations and analysis we demonstrate that our proposed solution is implementable in real-time on existing ToF cameras without requiring any hardware modifications.
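The phase-unwrapping half of the problem comes from each modulation frequency measuring depth only modulo its unambiguous range. A brute-force sketch of multi-frequency unwrapping follows; it is illustrative only, since SPUMIC uses spectral estimation rather than a grid search and additionally cancels multipath, which is ignored here. The frequencies and depth are hypothetical.

```python
import math

C = 3e8  # speed of light, m/s

def wrapped_phase(depth_m, freq_hz):
    """Phase observed by a homodyne ToF camera at one modulation frequency."""
    return (4.0 * math.pi * freq_hz * depth_m / C) % (2.0 * math.pi)

def unwrap_depth(phases, freqs, max_depth_m, step_m=0.001):
    """Pick the candidate depth whose predicted wrapped phases best match
    all measurements, using circular distance on each phase."""
    best_d, best_err = 0.0, float("inf")
    for k in range(int(max_depth_m / step_m) + 1):
        d = k * step_m
        err = 0.0
        for p, f in zip(phases, freqs):
            delta = abs(wrapped_phase(d, f) - p)
            err += min(delta, 2.0 * math.pi - delta)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

freqs = [20e6, 30e6]          # unambiguous alone only to 7.5 m / 5.0 m
true_depth = 9.2              # beyond either single-frequency range
phases = [wrapped_phase(true_depth, f) for f in freqs]
est = unwrap_depth(phases, freqs, max_depth_m=15.0)
```

The pair of frequencies extends the unambiguous range to c/(2·gcd(f1, f2)) = 15 m, which is why a depth of 9.2 m is recoverable even though each frequency alone aliases it.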


Computer Vision and Pattern Recognition | 2012

Compressive depth map acquisition using a single photon-counting detector: Parametric signal processing meets sparsity

Andrea Colaço; Ahmed Kirmani; Gregory A. Howland; John C. Howell; Vivek K Goyal

Active range acquisition systems such as light detection and ranging (LIDAR) and time-of-flight (TOF) cameras achieve high depth resolution but suffer from poor spatial resolution. In this paper we introduce a new range acquisition architecture that does not rely on scene raster scanning as in LIDAR or on a two-dimensional array of sensors as used in TOF cameras. Instead, we achieve spatial resolution through patterned sensing of the scene using a digital micromirror device (DMD) array. Our depth map reconstruction uses parametric signal modeling to recover the set of distinct depth ranges present in the scene. Then, using a convex program that exploits the sparsity of the Laplacian of the depth map, we recover the spatial content at the estimated depth ranges. In our experiments we acquired 64×64-pixel depth maps of fronto-parallel scenes at ranges up to 2.1 m using a pulsed laser, a DMD array, and a single photon-counting detector. We also demonstrated imaging in the presence of unknown partially-transmissive occluders. The prototype and results provide promising directions for non-scanning, low-complexity range acquisition devices for various computer vision applications.


IEEE Transactions on Computational Imaging | 2015

Photon-Efficient Computational 3-D and Reflectivity Imaging With Single-Photon Detectors

Dongeek Shin; Ahmed Kirmani; Vivek K. Goyal; Jeffrey H. Shapiro

Capturing depth and reflectivity images at low light levels from active illumination of a scene has wide-ranging applications. Conventionally, even with detectors sensitive to individual photons, hundreds of photon detections are needed at each pixel to mitigate Poisson noise. We develop a robust method for estimating depth and reflectivity using fixed dwell time per pixel and on the order of one detected photon per pixel averaged over the scene. Our computational image formation method combines physically accurate single-photon counting statistics with exploitation of the spatial correlations present in real-world reflectivity and 3-D structure. Experiments conducted in the presence of strong background light demonstrate that our method is able to accurately recover scene depth and reflectivity, while traditional imaging methods based on maximum likelihood (ML) estimation or approximations thereof lead to noisier images. For depth, performance compares favorably to signal-independent noise removal algorithms such as median filtering or block-matching and 3-D filtering (BM3D) applied to the pixelwise ML estimate; for reflectivity, performance is similar to signal-dependent noise removal algorithms such as Poisson nonlocal sparse PCA and BM3D with variance-stabilizing transformation. Our framework increases photon efficiency 100-fold over traditional processing and also improves, somewhat, upon first-photon imaging under a total acquisition time constraint in raster-scanned operation. Thus, our new imager will be useful for rapid, low-power, and noise-tolerant active optical imaging, and its fixed dwell time will facilitate parallelization through use of a detector array.
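The pixelwise maximum-likelihood baseline the paper improves on is simple to state: under low-flux Poisson statistics, the ML reflectivity estimate is the detection count divided by the expected count per unit reflectivity. The sketch below illustrates that baseline and its noise; all numbers are hypothetical, and the paper's spatial regularization is omitted.

```python
import random

def simulate_counts(reflectivity, n_pulses, scale, rng):
    """Each pulse yields a detection with probability reflectivity*scale
    (low-flux regime, so counts are approximately Poisson)."""
    p = reflectivity * scale
    return sum(1 for _ in range(n_pulses) if rng.random() < p)

def ml_reflectivity(counts, n_pulses, scale):
    """Pixelwise ML estimate: detections over expected detections per
    unit reflectivity. This is the noisy baseline; the paper adds
    spatial-correlation priors on top of it."""
    return counts / (n_pulses * scale)

rng = random.Random(42)
counts = simulate_counts(0.4, n_pulses=100_000, scale=0.01, rng=rng)
est = ml_reflectivity(counts, 100_000, 0.01)
```

With hundreds of expected detections the ML estimate is close to the true 0.4; at roughly one detected photon per pixel its relative variance explodes, which is the regime where the paper's method matters.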


International Conference on Acoustics, Speech, and Signal Processing | 2012

CoDAC: A compressive depth acquisition camera framework

Ahmed Kirmani; Andrea Colaço; Franco N. C. Wong; Vivek K Goyal

Light detection and ranging (LIDAR) systems use time of flight (TOF) in combination with raster scanning of the scene to form depth maps, and TOF cameras instead make TOF measurements in parallel by using an array of sensors. Here we present a framework for depth map acquisition using neither raster scanning by the illumination source nor an array of sensors. Our architecture uses a spatial light modulator (SLM) to spatially pattern a temporally-modulated light source. Then, measurements from a single omnidirectional sensor provide adequate information for depth map estimation at a resolution equal to that of the SLM. Proof-of-concept experiments have verified the validity of our modeling and algorithms.
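Why a single time-resolved sensor suffices can be seen from its time profile: returns from all illuminated points superpose, and the distinct depths in the scene appear as distinct arrival-time peaks. The sketch below illustrates that first, parametric step with a toy two-plane scene; all amplitudes and pulse parameters are assumptions, and simple peak picking stands in for the parametric signal modeling used in CoDAC.

```python
import numpy as np

C = 3e8  # speed of light, m/s

# Hypothetical scene: two planar patches at different depths. A single
# omnidirectional time-resolved sensor records the sum of all returns.
depths = [1.0, 2.5]           # metres
strengths = [300.0, 700.0]    # illustrative return amplitudes
sigma = 5e-11                 # assumed pulse width, seconds

t = np.arange(0.0, 30e-9, 1e-11)
profile = np.zeros_like(t)
for d, a in zip(depths, strengths):
    profile += a * np.exp(-((t - 2.0 * d / C) ** 2) / (2.0 * sigma**2))

# The distinct scene depths show up as peaks in the time profile;
# local maxima above a threshold recover them here.
is_peak = ((profile[1:-1] > profile[:-2]) & (profile[1:-1] >= profile[2:])
           & (profile[1:-1] > 0.1 * profile.max()))
est_depths = C * t[1:-1][is_peak] / 2.0
```

Recovering *which spatial positions* sit at each recovered depth is the second step, done in the paper with SLM patterns and the Laplacian-sparsity convex program.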


International Journal of Computer Vision | 2011

Looking Around the Corner using Ultrafast Transient Imaging

Ahmed Kirmani; Tyler Hutchison; James Davis; Ramesh Raskar

We propose a novel framework called transient imaging for image formation and scene understanding through impulse illumination and time images. Using time-of-flight cameras and multi-path analysis of global light transport, we pioneer new algorithms and systems for scene understanding through time images. We demonstrate that our proposed transient imaging framework allows us to accomplish tasks that are well beyond the reach of existing imaging technology. For example, one can infer the geometry of not only the visible but also the hidden parts of a scene, enabling us to look around corners. Traditional cameras estimate intensity per pixel I(x,y). Our transient imaging camera captures a 3D time-image I(x,y,t) for each pixel and uses an ultra-short pulse laser for illumination. Emerging technologies are supporting cameras with a temporal profile per pixel at picosecond resolution, allowing us to capture an ultra-high-speed time-image. This time-image contains the time profile of irradiance incident at a sensor pixel. We experimentally corroborated our theory with free-space hardware experiments using a femtosecond laser and a picosecond-accurate sensing device. The ability to infer the structure of hidden scene elements, unobservable by both the camera and illumination source, will create a range of new computer vision opportunities.


International Conference on Image Processing | 2014

Computational 3D and reflectivity imaging with high photon efficiency

Dongeek Shin; Ahmed Kirmani; Vivek K. Goyal; Jeffrey H. Shapiro

Capturing depth and reflectivity images at low light levels from active illumination of a scene has wide-ranging applications. Conventionally, even with single-photon detectors, hundreds of photon detections are needed at each pixel to mitigate Poisson noise. We introduce a robust method for estimating depth and reflectivity using on the order of 1 detected photon per pixel averaged over the scene. Our computational imager combines physically accurate single-photon counting statistics with exploitation of the spatial correlations present in real-world reflectivity and 3D structure. Experiments conducted in the presence of strong background light demonstrate that our computational imager is able to accurately recover scene depth and reflectivity, while traditional maximum likelihood-based imaging methods lead to estimates that are highly noisy. Our framework increases photon efficiency 100-fold over traditional processing and thus will be useful for rapid, low-power, and noise-tolerant active optical imaging.

Collaboration


Dive into Ahmed Kirmani's collaborations.

Top Co-Authors

Vivek K Goyal | Massachusetts Institute of Technology
Andrea Colaço | Massachusetts Institute of Technology
Dongeek Shin | Massachusetts Institute of Technology
Franco N. C. Wong | Massachusetts Institute of Technology
Jeffrey H. Shapiro | Massachusetts Institute of Technology
Ramesh Raskar | Massachusetts Institute of Technology
Dheera Venkatraman | Massachusetts Institute of Technology
James Davis | University of California
Haris Jeelani | Massachusetts Institute of Technology
Tyler Hutchison | Massachusetts Institute of Technology