Thierry Oggier
Carl Zeiss AG
Publication
Featured research published by Thierry Oggier.
Optical Design and Engineering | 2004
Thierry Oggier; Michael Lehmann; Rolf Kaufmann; Matthias Schweizer; Michael Richter; Peter Metzler; Graham K. Lang; Felix Lustenberger; Nicolas Blanc
A new miniaturized camera system that is capable of 3-dimensional imaging in real time is presented. The compact imaging device is able to entirely capture its environment in all three spatial dimensions. It reliably and simultaneously delivers intensity data as well as range information on the objects and persons in the scene. The depth measurement is based on the time-of-flight (TOF) principle. A custom solid-state image sensor allows the parallel measurement of the phase, offset and amplitude of a radio-frequency (RF) modulated light field that is emitted by the system and reflected back by the camera surroundings, without requiring any mechanical scanning parts. In this paper, the theoretical background of the implemented TOF principle is presented, together with the technological requirements and detailed practical implementation issues of such a distance measuring system. Furthermore, a schematic overview of the complete 3D camera system is provided. Experimental test results are presented and discussed. The present camera system can achieve sub-centimeter depth resolution over a wide range of operating conditions. A miniaturized version of such a 3D solid-state camera, the SwissRanger 2, is presented as an example, illustrating the possibility of manufacturing compact, robust and cost-effective ranging camera products for 3D imaging in real time.
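The phase, offset and amplitude measurement described in this abstract can be illustrated with the common four-sample ("4-tap") demodulation scheme. The sketch below is a minimal illustration under that assumption; the function name, the sample values and the 20 MHz modulation frequency are illustrative choices and are not taken from the paper. Samples are assumed to follow A_k = B + A·cos(φ + k·90°).

```python
import math

C = 299_792_458.0  # speed of light in m/s

def demodulate_4tap(a0, a1, a2, a3, f_mod):
    """Recover phase, amplitude, offset and distance from four samples of the
    received modulated signal, assuming A_k = B + A*cos(phi + k*90 deg)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)  # phase delay of the reflected light
    amplitude = 0.5 * math.sqrt((a3 - a1) ** 2 + (a0 - a2) ** 2)
    offset = (a0 + a1 + a2 + a3) / 4.0                    # DC level: background plus mean signal
    distance = C * phase / (4 * math.pi * f_mod)          # half the round-trip path length
    return phase, amplitude, offset, distance

# Illustrative samples (arbitrary units) at 20 MHz modulation
print(demodulate_4tap(220.0, 180.0, 140.0, 180.0, f_mod=20e6))
```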
Electronic Imaging | 2005
Thierry Oggier; Rolf Kaufmann; Michael Lehmann; Bernhard Büttgen; Simon Neukom; Michael Richter; Matthias Schweizer; Peter Metzler; Felix Lustenberger; Nicolas Blanc
The time-of-flight (TOF) principle is a well-known principle to acquire a scene in all three dimensions. The advantages of knowing the third dimension are obvious for many kinds of applications. The distance information within the scene renders automatic systems more robust and much less complex, or even enables completely new solutions. A solid-state image sensor containing 124 × 160 pixels and the corresponding 3D camera, the so-called SwissRanger camera, have already been presented in detail in [1]. It has been shown that the SwissRanger camera achieves depth resolutions in the sub-centimeter range, corresponding to a measured time resolution of a few tens of picoseconds with respect to the speed of light (c ≈ 3·10⁸ m/s). However, one main drawback of these so-called lock-in TOF pixels is their limited capacity to handle background illumination. Keeping in mind that in outdoor applications the optical power on the sensor originating from background illumination (e.g., sunlight) may be up to a few hundred times higher than the power of the modulated illumination, the sensor requires new pixel structures eliminating, or at least reducing, the currently experienced restrictions in terms of background illumination. Based on a 0.6 µm CMOS/CCD technology, four new pixel architectures suppressing background illumination and/or improving the ratio of modulated signal to background signal at the pixel-output level were developed and are presented in this paper. The theoretical principle of operation and the expected performance are described in detail, together with a sketch of the implementation of the different pixel designs at silicon level. Furthermore, test results obtained in a laboratory environment are published. The sensor structures are characterized in a high background-light environment with up to sunlight conditions. The distance linearity over a range of a few meters under these light conditions is measured. At the same time, the distance resolution is plotted as a function of the target distance, the integration time and the background illumination power. This in-depth evaluation leads to a comparison of the various background-suppression approaches; it also includes a comparison with the traditional pixel structure in order to highlight the benefits of the new approaches. The paper concludes with parameter estimates that give an outlook toward building a sensor with high lateral resolution based on the most promising pixel architecture.
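To make the background-illumination problem concrete, the toy simulation below adds photon shot noise to the four lock-in samples and reports how the scatter of the recovered distance grows as the background level rises relative to the modulated signal. It is a sketch under assumed, illustrative numbers (20 MHz modulation, a 2 m target, photon counts per sample) and does not reproduce the measurements in the paper.

```python
import math
import random

C = 299_792_458.0
F_MOD = 20e6  # illustrative modulation frequency

def distance_std(signal, background, true_dist, runs=2000):
    """Monte Carlo estimate of the distance standard deviation of a 4-tap
    lock-in pixel whose samples carry photon shot noise."""
    phase_true = 4 * math.pi * F_MOD * true_dist / C
    estimates = []
    for _ in range(runs):
        samples = []
        for k in range(4):
            expected = background + 0.5 * signal * (1 + math.cos(phase_true + k * math.pi / 2))
            # shot noise, Gaussian approximation of Poisson statistics for large counts
            samples.append(random.gauss(expected, math.sqrt(expected)))
        a0, a1, a2, a3 = samples
        phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
        estimates.append(C * phase / (4 * math.pi * F_MOD))
    mean = sum(estimates) / runs
    return math.sqrt(sum((d - mean) ** 2 for d in estimates) / runs)

for bg in (0.0, 1e4, 1e5, 1e6):  # background photons per sample
    print(f"background {bg:>9.0f}: sigma = {distance_std(1e4, bg, 2.0):.4f} m")
```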
Journal of Biomedical Optics | 2006
Alessandro Esposito; Hans C. Gerritsen; Thierry Oggier; Felix Lustenberger; Fred S. Wouters
Fluorescence lifetime imaging microscopy (FLIM) allows the investigation of the physicochemical environment of fluorochromes and protein-protein interaction mapping by Förster resonance energy transfer (FRET) in living cells. However, simpler and cheaper solutions are required before this powerful analytical technique finds a broader application in the life sciences. Wide-field frequency-domain FLIM represents a solution whose application is currently limited by the need for multichannel-plate image intensifiers. We recently showed the feasibility of using a charge-coupled device/complementary metal-oxide-semiconductor (CCD/CMOS) hybrid lock-in imager, originally developed for 3D vision, as an add-on device for lifetime measurements on existing wide-field microscopes. In the present work, the performance of the setup is validated by comparison with well-established wide-field frequency-domain FLIM measurements. Furthermore, we combine the lock-in imager with solid-state light sources. This results in a simple, inexpensive, and compact FLIM system, operating at video rate and capable of single-shot acquisition by virtue of the unique parallel retrieval of two phase-dependent images. This novel FLIM setup is used for cellular and FRET imaging, and for high-throughput and fast imaging applications. The all-solid-state design bridges the technological gap that limits the use of FLIM in areas such as drug discovery and medical diagnostics.
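For context, frequency-domain FLIM estimates the lifetime from the phase delay and the demodulation of the emission relative to the excitation. The sketch below shows the standard single-exponential estimators; the 20 MHz frequency and the example values are illustrative assumptions, not data from the paper.

```python
import math

def lifetimes_frequency_domain(phase_shift_rad, modulation_depth, f_mod_hz):
    """Single-exponential lifetime estimates from frequency-domain FLIM:
    tau_phi from the phase delay, tau_m from the relative demodulation."""
    omega = 2 * math.pi * f_mod_hz
    tau_phi = math.tan(phase_shift_rad) / omega
    tau_m = math.sqrt(1.0 / modulation_depth ** 2 - 1.0) / omega
    return tau_phi, tau_m

# Example: 40 degrees of phase delay and 70% relative modulation at 20 MHz
tau_phi, tau_m = lifetimes_frequency_domain(math.radians(40), 0.70, 20e6)
print(tau_phi * 1e9, tau_m * 1e9)  # lifetimes in nanoseconds
```

For a single-exponential decay the two estimates coincide; a systematic difference between them indicates a multi-exponential decay.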
Perception and Interactive Technologies | 2006
Thierry Oggier; Felix Lustenberger; Nicolas Blanc
In the past, measuring a scene in all three dimensions has been either very expensive, slow or extremely computationally intensive. The latest progress in the field of microtechnology enables a breakthrough for time-of-flight (TOF) based distance-measuring devices. This paper describes the basic principle of TOF measurement and a first specific implementation in a state-of-the-art 3D camera, the "SwissRanger SR-3000" [T. Oggier et al., 2005]. Acquired image sequences are presented as well.
Photonics Europe | 2004
Rolf Kaufmann; Michael Lehmann; Matthias Schweizer; Michael Richter; Peter Metzler; Graham K. Lang; Thierry Oggier; Nicolas Blanc; Peter Seitz; Gabriel Gruener; Urs Zbinden
A new miniaturised 256-pixel silicon line sensor, which allows for the acquisition of depth-resolved images in real time, is presented. It reliably and simultaneously delivers intensity data as well as distance information on the objects in the scene. The depth measurement is based on the time-of-flight (TOF) principle. The device allows the simultaneous measurement of the phase, offset and amplitude of a radio-frequency modulated light field that is emitted by the system and reflected back by the camera surroundings, without requiring any mechanical scanning parts. The 3D line sensor will be used on a mobile robot platform to replace the laser range scanners traditionally used for navigation in dynamic and/or unknown environments.
Electronic Imaging | 2006
Bernhard Büttgen; Thierry Oggier; Michael Lehmann; Rolf Kaufmann; Simon Neukom; Michael Richter; Matthias Schweizer; David Beyeler; Roger Cook; Christiane Gimkiewicz; Claus Urban; Peter Metzler; Peter Seitz; Felix Lustenberger
Optical time-of-flight (TOF) distance measurements can be performed using so-called smart lock-in pixels. By sampling the optical signal 2, 4 or n times in each pixel synchronously with the modulation frequency, the phase between the emitted and the reflected signal is extracted and the object's distance is determined. The high integration level of such lock-in pixels enables the real-time acquisition of the three-dimensional environment without using any moving mechanical components. A novel design of the 2-tap lock-in pixel in a 0.6 μm semiconductor technology is presented. The pixel was implemented on a sensor with QCIF resolution. The optimized pixel design allows for high-speed operation of the device, resulting in nearly optimum demodulation performance and precise distance measurements that are almost exclusively limited by photon shot noise. In-pixel background-light suppression allows the sensor to be operated in an outdoor environment with sunlight incidence. The highly complex pixel functionality of the sensor was successfully demonstrated in the new SwissRanger SR3000 3D-TOF camera design. Distance resolutions in the millimeter range have been achieved while the camera operates at frame rates of more than 20 Hz.
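As a rough illustration of how a 2-tap pixel obtains the samples needed for demodulation, the sketch below accumulates two taps switched 180° apart by the reference clock and combines two exposures whose reference is shifted by a quarter period. This is an assumed, simplified acquisition model for illustration, not a description of the SR3000 implementation; all numbers are made up.

```python
import math

def two_tap_exposure(signal_fn, ref_phase, steps=1000):
    """Integrate the incoming optical signal over one modulation period into
    two taps switched 180 degrees apart by the reference clock."""
    tap_a = tap_b = 0.0
    for i in range(steps):
        t = 2 * math.pi * i / steps
        value = signal_fn(t)
        # the reference clock decides which tap collects the photocharge
        if math.cos(t + ref_phase) >= 0:
            tap_a += value
        else:
            tap_b += value
    return tap_a, tap_b

# Received optical signal: constant background plus modulation delayed by the flight time
phase_delay = 1.2  # radians, illustrative
received = lambda t: 1.0 + 0.5 * math.cos(t - phase_delay)

a0, a2 = two_tap_exposure(received, ref_phase=0.0)          # taps at 0 and 180 degrees
a1, a3 = two_tap_exposure(received, ref_phase=math.pi / 2)  # second exposure, reference shifted a quarter period
print(math.atan2(a3 - a1, a0 - a2) % (2 * math.pi))         # recovers roughly phase_delay (1.2 rad)
```

The constant background cancels in the differences used for the phase estimate, but it still fills the pixel's charge capacity and contributes shot noise, which is why the in-pixel background suppression described above matters.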
tm - Technisches Messen | 2004
Peter Seitz; Thierry Oggier; Nicolas Blanc
A new type of optical 3D semiconductor camera is described that is based on the time-of-flight principle and requires no moving parts. For this purpose, a light source (LED or laser-diode array) is modulated at a frequency of 10–50 MHz, and the diffusely back-scattered light is detected with a custom semiconductor image sensor that can determine the modulation parameters offset, amplitude and phase at every pixel according to the lock-in principle. For highest sensitivity, the image sensor was integrated in a combined CCD/CMOS technology. This image sensor is the key element of the miniaturized 3D camera SwissRanger®, with which 3D range-image sequences can be acquired in video real time. Over wide operating ranges, the distance resolution is determined only by the quantum noise of the incident light; under optimal conditions, a distance resolution of a few millimeters is achieved at measurement distances of up to 10 m.
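To put the 10–50 MHz modulation frequencies into perspective, the snippet below computes the unambiguous range c/(2·f_mod) of a continuous-wave TOF camera, i.e. the distance beyond which the measured phase wraps around; the listed frequencies are illustrative values within the quoted range.

```python
C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(f_mod_hz):
    """Maximum distance a CW time-of-flight camera can report before the
    phase measurement wraps (round trip equal to one modulation period)."""
    return C / (2 * f_mod_hz)

for f_mod in (10e6, 20e6, 50e6):
    print(f"{f_mod / 1e6:.0f} MHz -> {unambiguous_range(f_mod):.1f} m")
```

At 10 MHz this gives about 15 m, consistent with the measurement distances of up to 10 m quoted above.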
Archive | 2006
Thierry Oggier; Michael Lehmann; Bernhard Büttgen
Archive | 2008
Michael Lehmann; Bernhard Buettgen; Thierry Oggier
British Machine Vision Conference | 2005
Huan Du; Thierry Oggier; Felix Lustenberger; Edoardo Charbon