Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andreas Velten is active.

Publication


Featured research published by Andreas Velten.


Nature Communications | 2012

Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging

Andreas Velten; Thomas H Willwacher; Otkrist Gupta; Ashok Veeraraghavan; Moungi G. Bawendi; Ramesh Raskar

The recovery of objects obscured by scattering is an important goal in imaging and has been approached by exploiting, for example, coherence properties, ballistic photons or penetrating wavelengths. Common methods use scattered light transmitted through an occluding material, although these fail if the occluder is opaque. Light is scattered not only by transmission through objects, but also by multiple reflection from diffuse surfaces in a scene. This reflected light contains information about the scene that becomes mixed by the diffuse reflections before reaching the image sensor. This mixing is difficult to decode using traditional cameras. Here we report the combination of a time-of-flight technique and computational reconstruction algorithms to untangle image information mixed by diffuse reflection. We demonstrate a three-dimensional range camera able to look around a corner using diffusely reflected light that achieves sub-millimetre depth precision and centimetre lateral precision over 40 cm×40 cm×40 cm of hidden space.
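The reconstruction step can be illustrated with a minimal backprojection sketch (a simplified stand-in for the authors' reconstruction algorithm; the 2 ps bin width, the toy geometry, and all names are assumptions for illustration): each hidden-space voxel accumulates the recorded intensity at the time bin matching the round-trip path length from laser spot to voxel to sensor spot, so voxels on real surfaces accumulate consistent contributions.

```python
import numpy as np

# Illustrative backprojection for NLOS imaging (not the paper's exact
# pipeline): each candidate voxel sums the measured transient intensity
# at the time bin matching the path laser spot -> voxel -> sensor spot.

C = 3e8          # speed of light, m/s
BIN = 2e-12 * C  # path length per 2 ps time bin, in metres

def backproject(transients, laser_pts, sensor_pts, voxels):
    """transients[i]: 1-D time histogram for (laser_pts[i], sensor_pts[i])."""
    heat = np.zeros(len(voxels))
    for hist, l, s in zip(transients, laser_pts, sensor_pts):
        # round-trip distance from laser spot to each voxel to sensor spot
        d = np.linalg.norm(voxels - l, axis=1) + np.linalg.norm(voxels - s, axis=1)
        idx = np.clip((d / BIN).astype(int), 0, len(hist) - 1)
        heat += hist[idx]
    return heat

# Toy check: a single hidden point reflects into every histogram.
rng = np.random.default_rng(0)
laser_pts = rng.uniform(-0.2, 0.2, (16, 3)); laser_pts[:, 2] = 0
sensor_pts = rng.uniform(-0.2, 0.2, (16, 3)); sensor_pts[:, 2] = 0
hidden = np.array([0.0, 0.0, 0.3])
transients = []
for l, s in zip(laser_pts, sensor_pts):
    hist = np.zeros(2048)
    d = np.linalg.norm(hidden - l) + np.linalg.norm(hidden - s)
    hist[int(d / BIN)] = 1.0
    transients.append(hist)
voxels = np.array([[0, 0, 0.1], [0, 0, 0.3], [0, 0, 0.5]])
heat = backproject(transients, laser_pts, sensor_pts, voxels)
print(heat.argmax())  # the voxel at the true hidden point scores highest
```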


Optics Express | 2015

Non-line-of-sight imaging using a time-gated single photon avalanche diode

Mauro Buttafava; Jessica Zeman; Alberto Tosi; Kevin W. Eliceiri; Andreas Velten

By using time-of-flight information encoded in multiply scattered light, it is possible to reconstruct images of objects hidden from the camera's direct line of sight. Here, we present a non-line-of-sight imaging system that uses a single-pixel, single-photon avalanche diode (SPAD) to collect time-of-flight information. Compared to earlier systems, this modification provides significant improvements in terms of power requirements, form factor, cost, and reconstruction time, while maintaining a comparable time resolution. The potential for further size and cost reduction of this technology makes this system a good base for developing a practical system that can be used in real-world applications.


International Journal of Computer Vision | 2014

Decomposing Global Light Transport Using Time of Flight Imaging

Di Wu; Andreas Velten; Matthew O'Toole; Belen Masia; Amit K. Agrawal; Qionghai Dai; Ramesh Raskar

Global light transport is composed of direct and indirect components. In this paper, we take the first steps toward analyzing light transport using the high temporal resolution information of time of flight (ToF) images. With pulsed scene illumination, the time profile at each pixel of these images separates different illumination components by their finite travel time and encodes complex interactions between the incident light and the scene geometry with spatially-varying material properties. We exploit the time profile to decompose light transport into its constituent direct, subsurface scattering, and interreflection components. We show that the time profile is well modelled using a Gaussian function for the direct and interreflection components, and a decaying exponential function for the subsurface scattering component. We use our direct, subsurface scattering, and interreflection separation algorithm for five computer vision applications: recovering projective depth maps, identifying subsurface scattering objects, measuring parameters of analytical subsurface scattering models, performing edge detection using ToF images and rendering novel images of the captured scene with adjusted amounts of subsurface scattering.
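The per-pixel time-profile model described above can be sketched numerically (a minimal illustration; the parameter values and the synthetic profile are assumptions, not data from the paper): Gaussians for the direct and interreflection components, and a decaying exponential, switched on at the direct arrival, for subsurface scattering.

```python
import numpy as np

# Sketch of the per-pixel time-profile model: direct and interreflection
# components modelled as Gaussians, subsurface scattering as a decaying
# exponential. All amplitudes and timings below are illustrative.

def gaussian(t, a, mu, sigma):
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def subsurface(t, a, t0, tau):
    # decaying exponential, zero before the direct pulse arrives
    return np.where(t >= t0, a * np.exp(-(t - t0) / tau), 0.0)

t = np.linspace(0, 100, 1001)            # time axis in picoseconds
profile = (gaussian(t, 1.0, 20, 2)       # direct component
           + subsurface(t, 0.4, 20, 15)  # subsurface scattering tail
           + gaussian(t, 0.3, 60, 4))    # later interreflection bounce

print(t[profile.argmax()])  # the peak sits at the direct arrival time
```

Fitting these three parametric shapes to a measured profile is what separates the components per pixel.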


Computer Vision and Pattern Recognition | 2011

Estimating motion and size of moving non-line-of-sight objects in cluttered environments

Rohit Pandharkar; Andreas Velten; Andrew Bardagjy; Everett Lawson; Moungi G. Bawendi; Ramesh Raskar

We present a technique for motion and size estimation of non-line-of-sight (NLOS) moving objects in cluttered environments using a time-of-flight camera and multipath analysis. We exploit relative times of arrival after reflection from a grid of points on a diffuse surface and create a virtual phased array. By subtracting space-time impulse responses for successive frames, we separate responses of NLOS moving objects from those resulting from the cluttered environment. After reconstructing the line-of-sight scene geometry, we analyze the space of wavefronts using the phased array and solve a constrained least squares problem to recover the NLOS target location. Importantly, we can recover the target's motion vector even in the presence of uncalibrated time and pose bias common in time-of-flight systems. In addition, we compute an upper bound on the size of the target by backprojecting the extrema of the time profiles. The ability to track targets inside rooms despite opaque occluders and multipath responses has numerous applications in search and rescue, medicine, and defense. We show centimetre-accurate results by making appropriate modifications to a time-of-flight system.
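The localization step can be illustrated with a hedged time-of-arrival sketch (a simplified stand-in for the paper's constrained least-squares formulation; the one-way distances, the wall geometry, and all names are assumptions): with observation points on the wall plane z = 0 and distances d_i = |x - r_i| to the hidden target x, the equations linearize by treating |x|^2 as an extra unknown.

```python
import numpy as np

# Illustrative time-of-arrival localization (not the paper's exact
# formulation): |x - r_i| = d_i  expands to
#   -2 r_i . x + |x|^2 = d_i^2 - |r_i|^2,
# linear in (x, y, s) with s = |x|^2 once the wall points have z = 0.

def locate(receivers, dists):
    """receivers: (n, 3) points on the plane z = 0; dists: (n,) ranges."""
    A = np.hstack([-2 * receivers[:, :2], np.ones((len(receivers), 1))])
    b = dists**2 - np.sum(receivers**2, axis=1)
    (x, y, s), *_ = np.linalg.lstsq(A, b, rcond=None)
    z = np.sqrt(max(s - x * x - y * y, 0.0))  # depth in front of the wall
    return np.array([x, y, z])

rng = np.random.default_rng(1)
receivers = rng.uniform(-0.5, 0.5, (12, 3)); receivers[:, 2] = 0
target = np.array([0.1, -0.2, 0.4])
dists = np.linalg.norm(receivers - target, axis=1)
est = locate(receivers, dists)
print(np.round(est, 3))  # recovers the hidden target position
```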


Optical Engineering | 2014

Nonline-of-sight laser gated viewing of scattered photons

Martin Laurenzis; Andreas Velten

Laser gated viewing is a prominent sensing technology for optical imaging in harsh environments and can be applied for vision through fog, smoke, and other degraded environmental conditions, as well as for vision through sea water in submarine operations. Direct imaging of non-scattered (or ballistic) photons is limited in range and performance by the free optical path length, i.e., the length over which a photon can propagate without interacting with scattering particles or object surfaces. The imaging and analysis of scattered photons can overcome these classical limitations, making non-line-of-sight imaging possible. The spatial and temporal distributions of scattered photons can be analyzed by means of computational optics, and the information they carry about the scene can be restored. In particular, information from outside the line of sight or beyond the visibility range is of high interest. We demonstrate non-line-of-sight imaging with a laser gated viewing system and different illumination concepts (point and surface scattering sources).


Computer Vision and Pattern Recognition | 2012

Decomposing global light transport using time of flight imaging

Di Wu; Matthew O'Toole; Andreas Velten; Amit K. Agrawal; Ramesh Raskar

Global light transport is composed of direct and indirect components. In this paper, we take the first steps toward analyzing light transport using high temporal resolution information via time of flight (ToF) images. The time profile at each pixel encodes complex interactions between the incident light and the scene geometry with spatially-varying material properties. We exploit the time profile to decompose light transport into its constituent direct, subsurface scattering, and interreflection components. We show that the time profile is well modelled using a Gaussian function for the direct and interreflection components, and a decaying exponential function for the subsurface scattering component. We use our direct, subsurface scattering, and interreflection separation algorithm for four computer vision applications: recovering projective depth maps, identifying subsurface scattering objects, measuring parameters of analytical subsurface scattering models, and performing edge detection using ToF images.


International Conference on Computer Graphics and Interactive Techniques | 2011

Slow art with a trillion frames per second camera

Andreas Velten; Everett Lawson; Andrew Bardagjy; Moungi G. Bawendi; Ramesh Raskar

How will the world look with a one-trillion-frame-per-second camera? Although such a camera does not exist today, we converted high-end research equipment to produce conventional movies at 0.5 trillion (5×10¹¹) frames per second, with light moving barely 0.6 mm in each frame. Our camera has the game-changing ability to capture objects moving at the speed of light. Inspired by the classic high-speed photography art of Harold Edgerton [Kayafas and Edgerton 1987], we use this camera to capture movies of several scenes.
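The per-frame figure quoted above follows directly from the frame rate; a quick check:

```python
# Quick check of the numbers above: at 0.5 trillion frames per second,
# how far does light travel between consecutive frames?
c = 299_792_458          # speed of light, m/s
fps = 0.5e12             # frames per second
per_frame_mm = c / fps * 1000
print(round(per_frame_mm, 2))  # → 0.6
```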


International Conference on Computer Graphics and Interactive Techniques | 2012

Relativistic ultrafast rendering using time-of-flight imaging

Andreas Velten; Di Wu; Adrian Jarabo; Belen Masia; Christopher Barsi; Everett Lawson; Chinmaya Joshi; Diego Gutierrez; Moungi G. Bawendi; Ramesh Raskar

We capture ultrafast movies of light in motion and synthesize physically valid visualizations. The effective exposure time for each frame is under two picoseconds (ps). Capturing a 2D video with this time resolution is highly challenging, given the low signal-to-noise ratio (SNR) associated with ultrafast exposures, as well as the absence of 2D cameras that operate at this time scale. We re-purpose modern imaging hardware to record an average of ultrafast repeatable events that are synchronized to a streak tube, and we introduce reconstruction methods to visualize propagation of light pulses through macroscopic scenes.


Communications of the ACM | 2016

Imaging the propagation of light through scenes at picosecond resolution

Andreas Velten; Di Wu; Belen Masia; Adrian Jarabo; Christopher Barsi; Chinmaya Joshi; Everett Lawson; Moungi G. Bawendi; Diego Gutierrez; Ramesh Raskar

We present a novel imaging technique, which we call femto-photography, to capture and visualize the propagation of light through table-top scenes with an effective exposure time of 1.85 ps per frame. This is equivalent to a resolution of about one half trillion frames per second; between frames, light travels approximately just 0.5 mm. Since cameras with such extreme shutter speeds obviously do not exist, we first re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We then introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through the scenes. Given this fast resolution and the finite speed of light, we observe that the camera does not necessarily capture events in the same order as they occur in reality: we thus introduce the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take this into account. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique has already motivated new forms of computational photography, as well as novel algorithms for the analysis and synthesis of light transport.
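The time-unwarping idea can be sketched as a per-pixel shift by travel time (an illustration of the concept under a simple depth/c assumption; the function and variable names are not the paper's): light recorded by the camera at time t left its scene point one travel time earlier, so world times are recovered by subtracting each pixel's depth divided by the speed of light.

```python
import numpy as np

# Hedged sketch of time-unwarping: shift each pixel's camera arrival
# time back by its travel time to the camera (depth / c).

C_MM_PER_PS = 0.2998  # speed of light, millimetres per picosecond

def unwarp(camera_times_ps, depth_mm):
    """Convert per-pixel camera arrival times to world emission times."""
    return np.asarray(camera_times_ps) - np.asarray(depth_mm) / C_MM_PER_PS

# Two points lit at the same world instant but at different depths appear
# at different camera times; unwarping restores their simultaneity.
depths = np.array([300.0, 600.0])            # mm from the camera
camera_times = 50.0 + depths / C_MM_PER_PS   # ps, as the camera records them
world_times = unwarp(camera_times, depths)
print(np.allclose(world_times, 50.0))  # → True
```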


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2014

Estimating wide-angle, spatially varying reflectance using time-resolved inversion of backscattered light

Nikhil Naik; Christopher Barsi; Andreas Velten; Ramesh Raskar

Imaging through complex media is a well-known challenge, as scattering distorts a signal and invalidates imaging equations. For coherent imaging, the input field can be reconstructed using phase conjugation or knowledge of the complex transmission matrix. However, for incoherent light, wave interference methods are limited to small viewing angles. On the other hand, time-resolved methods do not rely on signal or object phase correlations, making them suitable for reconstructing wide-angle, larger-scale objects. Previously, a time-resolved technique was demonstrated for uniformly reflecting objects. Here, we generalize the technique to reconstruct the spatially varying reflectance of shapes hidden by angle-dependent diffuse layers. The technique is a noninvasive method of imaging three-dimensional objects without relying on coherence. For a given diffuser, ultrafast measurements are used in a convex optimization program to reconstruct a wide-angle, three-dimensional reflectance function. The method has potential use for biological imaging and material characterization.

Collaboration


Dive into Andreas Velten's collaborations.

Top Co-Authors

Ramesh Raskar, Massachusetts Institute of Technology
Andreas Schmitt-Sody, Air Force Research Laboratory
Moungi G. Bawendi, Massachusetts Institute of Technology
Christopher Barsi, Massachusetts Institute of Technology
Ladan Arissian, University of New Mexico
Di Wu, Tsinghua University
Eric C. Breitbach, University of Wisconsin-Madison