Gerd Häusler
University of Erlangen-Nuremberg
Publications
Featured research published by Gerd Häusler.
Applied Optics | 1992
Thomas Dresel; Gerd Häusler; Holger Venzke
We introduce a three-dimensional sensor, designed primarily for rough objects, whose accuracy is limited only by the roughness of the object surface. This differs from conventional optical systems, in which the depth accuracy is limited by the aperture. Consequently, our sensor supplies high accuracy with a small aperture, i.e., we can probe narrow crevices and holes. The sensor is based on a Michelson interferometer, with the rough object surface serving as one mirror, and exploits the small coherence length of the light source: while scanning the object in depth, one can detect the local occurrence of interference within the speckles emerging from the object. We call this method coherence radar.
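The detection step described above — finding where interference appears during the depth scan — can be sketched for a single speckle. This is a minimal simulation under assumed parameters (Gaussian spectral envelope, illustrative wavelength and coherence length), not the authors' instrument:

```python
import numpy as np

def correlogram(z_scan, z_surface, wavelength=0.6e-6, coherence_length=2e-6):
    """Intensity along the depth scan for one speckle.

    Interference fringes only appear where the round-trip path difference
    is within the coherence length (Gaussian envelope assumed)."""
    opd = 2.0 * (z_scan - z_surface)  # round-trip optical path difference
    envelope = np.exp(-(opd / coherence_length) ** 2)
    return 1.0 + envelope * np.cos(2.0 * np.pi * opd / wavelength)

def locate_surface(z_scan, intensity):
    """Estimate the surface height as the position of maximum fringe contrast."""
    contrast = np.abs(intensity - intensity.mean())
    return z_scan[np.argmax(contrast)]

# Scan 20 um of depth in 5 nm steps past a surface point at z = 1.3 um
z_scan = np.linspace(-10e-6, 10e-6, 4001)
z_true = 1.3e-6
I = correlogram(z_scan, z_true)
z_est = locate_surface(z_scan, I)
```

In a real sensor this evaluation runs in parallel for every camera pixel, and the envelope peak would typically be interpolated rather than taken at the nearest scan sample.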
Applied Optics | 1994
Rainer Dorsch; Gerd Häusler; Jürgen Herrmann
We discuss the uncertainty limit in distance sensing by laser triangulation. The uncertainty in distance measurement of laser triangulation sensors and other coherent sensors is limited by speckle noise. Speckle arises because of the coherent illumination in combination with rough surfaces. A minimum limit on the distance uncertainty is derived through speckle statistics. This uncertainty is a function of wavelength, observation aperture, and speckle contrast in the spot image. Surprisingly, it is the same distance uncertainty that we obtained from a single-photon experiment and from Heisenberg's uncertainty principle. Experiments confirm the theory. An uncertainty principle connecting lateral resolution and distance uncertainty is introduced. Design criteria for a sensor with minimum distance uncertainty are determined: small temporal coherence, small spatial coherence, and a large observation aperture.
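The uncertainty limit can be evaluated numerically. The sketch below uses the form in which this result is commonly quoted, δz = Cλ / (2π · sin u_obs · sin θ), with C the speckle contrast, u_obs the observation aperture angle, and θ the triangulation angle; treat the exact prefactor as an assumption — the scaling with contrast, wavelength, and the two angles is the point:

```python
import math

def triangulation_uncertainty(wavelength, aperture_angle, triangulation_angle,
                              speckle_contrast=1.0):
    """Speckle-limited distance uncertainty of a triangulation sensor
    (commonly quoted form; prefactor assumed)."""
    return (speckle_contrast * wavelength
            / (2.0 * math.pi
               * math.sin(aperture_angle)
               * math.sin(triangulation_angle)))

# Example: HeNe laser, sin(u_obs) = 0.03, 30 deg triangulation angle,
# fully developed speckle (C = 1) -> uncertainty of a few micrometers
dz = triangulation_uncertainty(633e-9, math.asin(0.03), math.radians(30.0))
```

Doubling the observation aperture halves the uncertainty, which is why the design criteria above call for a large observation aperture.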
Optics Letters | 1996
Gerd Häusler; J. M. Herrmann; R. Kummer; M. W. Lindner
An optical method is introduced for observation of temporally and spatially resolved frames that show how light propagates in diffusely scattering materials. The method permits videos with 100-fs temporal resolution to be produced. The method utilizes short-coherence interferometry. The source of information is the speckle contrast. The temporal and spatial evolution of the multiple scattering process is demonstrated for several biological and industrial samples. A major objective of the method is to investigate the conditions for optimum coherence and optimum apertures to achieve high resolution in short-coherence interferometry. One important result is that during the propagation a sharp photon horizon evolves, which is useful for the morphological analysis of volume scatterers.
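Since the speckle contrast C = σ_I / ⟨I⟩ is the source of information here, a minimal illustration of how it behaves (synthetic intensity statistics, not the paper's data): fully developed speckle has C ≈ 1, while an incoherent background washes the contrast out toward 0.

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = std(I) / mean(I) of an intensity sample."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()

rng = np.random.default_rng(0)
# Fully developed speckle: negative-exponential intensity statistics, C -> 1
coherent = rng.exponential(scale=1.0, size=200_000)
# Adding an incoherent background lowers the contrast
washed_out = coherent + 4.0

c1 = speckle_contrast(coherent)
c2 = speckle_contrast(washed_out)
```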
Applied Optics | 1988
Gerd Häusler; Werner Heckel
We report a method for 3-D sensing by light sectioning. The specific goal is to demonstrate that high resolution and large depth can be achieved simultaneously, thus overcoming the major limitation of conventional light sectioning. We use the diffraction pattern of an axicon to generate a light knife with large depth of field (for example, 1700 mm) and high lateral resolution (for example, 55 µm). Illuminating an object with this light knife creates a profile on the object. We detect this profile with a CCD TV camera and evaluate the centroid of the profile by means of an interpolation algorithm.
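The centroid evaluation mentioned at the end can be sketched in a few lines: for each camera column, the line position is taken as the intensity centroid, which gives sub-pixel resolution. The Gaussian test profile below is synthetic:

```python
import numpy as np

def line_centroid(column_intensity):
    """Sub-pixel centroid of a 1-D intensity profile (one camera column)."""
    i = np.asarray(column_intensity, dtype=float)
    rows = np.arange(i.size)
    return (rows * i).sum() / i.sum()

# Synthetic light-knife profile: Gaussian line centred at row 12.3
rows = np.arange(30)
profile = np.exp(-0.5 * ((rows - 12.3) / 1.5) ** 2)
center = line_centroid(profile)
```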
Applied Optics | 2008
Svenja Ettl; Jürgen Kaminski; Markus C. Knauer; Gerd Häusler
We present a generalized method for reconstructing the shape of an object from measured gradient data. A certain class of optical sensors does not measure the shape of an object but rather its local slope. These sensors display several advantages, including high information efficiency, sensitivity, and robustness. For many applications, however, it is necessary to acquire the shape, which must be calculated from the slopes by numerical integration. Existing integration techniques show drawbacks that render them unusable in many cases. Our method is based on an approximation employing radial basis functions. It can be applied to irregularly sampled, noisy, and incomplete data, and it reconstructs surfaces both locally and globally with high accuracy.
Journal of Orofacial Orthopedics-fortschritte Der Kieferorthopadie | 2007
Jutta Hartmann; Philipp Meyer-Marcotty; Michaela Benz; Gerd Häusler; Angelika Stellzig-Eisenhauer
Objective: The objective of this study was to analyze the reliability of a landmark-independent method for determining the facial symmetry plane and degree of asymmetry based on three-dimensional data of the facial surface from two sets of recordings, one performed consecutively and one performed on different days.
Materials and Methods: We used an optical 3D sensor to obtain the facial data of one male subject in two sets of ten measurements: the first taken consecutively, the second on different days. The symmetry plane and degree of asymmetry were calculated for each of the resulting twenty data sets. One data set was analyzed ten times for control purposes. The mean deviation angle between the symmetry planes served as a measure of the reproducibility of the results.
Results: Although the mean angular deviations of the computed symmetry planes, 0.134° (for the ten consecutively captured images) and 0.177° (for the ten images captured on different days), were each significantly higher than the mean angular deviation (0.028°) calculated from ten analyses of a single image, they can still be regarded as very small. There were no significant differences in the degree of asymmetry among the three measurement sets, and the standard deviations were low.
Conclusions: This method can be used to compute the symmetry plane and degree of asymmetry of facial 3D data with high reliability.
The color-coded visualization of asymmetrical facial regions makes it possible for this analytical procedure to capture the asymmetries of facial soft tissue with substantially greater precision than two-dimensional en face images.
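The reproducibility measure used above — the mean angular deviation between computed symmetry planes — reduces to angles between the planes' unit normals. A sketch with illustrative normal vectors of roughly the magnitude of deviation reported in the study:

```python
import numpy as np

def plane_angle_deg(n1, n2):
    """Angle in degrees between two planes given their normal vectors.

    The absolute value makes the result independent of normal orientation,
    since a plane and its flipped normal describe the same plane."""
    n1 = n1 / np.linalg.norm(n1)
    n2 = n2 / np.linalg.norm(n2)
    cosang = np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)
    return np.degrees(np.arccos(cosang))

# Three illustrative symmetry-plane normals from repeated measurements
normals = np.array([
    [1.0, 0.000, 0.002],
    [1.0, 0.001, 0.000],
    [1.0, 0.002, 0.001],
])
angles = [plane_angle_deg(normals[i], normals[j])
          for i in range(len(normals)) for j in range(i + 1, len(normals))]
mean_dev = float(np.mean(angles))   # mean angular deviation, degrees
```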
Plastic and Reconstructive Surgery | 2003
Emeka Nkenke; Astrid Langer; Xavier Laboureux; Michaela Benz; Tobias Maier; Manuel Kramer; Gerd Häusler; Peter Kessler; Jörg Wiltfang; Friedrich Wilhelm Neukam
The purpose of this study was to validate the assessment of visible volume changes of the facial soft tissue with an optical three-dimensional sensor and to introduce new parameters for the evaluation of the soft-tissue shape achieved from three-dimensional data of selected cases of midfacial distraction. Images of a truncated cone of known volume were assessed repeatedly with an optical three-dimensional sensor based on phase-measuring triangulation to calculate the volume. Two cubic centimeters of anesthetic solution was injected into the right malar region of 10 volunteers who gave their informed consent. Three-dimensional images were assessed before and immediately after the injections for the assessment of the visible volume change. In five patients who underwent midfacial distraction after a high quadrangular Le Fort I osteotomy, three-dimensional scans were acquired before and 6 and 24 months after the operation. The visible soft-tissue volume change in the malar-midfacial area and the mean distance of the accommodation vector that transformed the preoperative into the postoperative surface were calculated. The volume of the truncated cone was 235.26 ± 1.01 cc, revealing a measurement uncertainty of 0.4 percent. The injections of anesthetic solution into the malar area resulted in an average visible volume change of 2.06 ± 0.06 cc. The measurement uncertainty was 3 percent. In the five patients, the average distance of maxillary advancement was 6.7 ± 2.3 mm after 6 months and 5.4 ± 3.0 mm after 2 years. It was accompanied by a mean visible volume increase of 8.92 ± 5.95 cc on the right side and 9.54 ± 4.39 cc on the left side after 6 months and 3.54 ± 3.70 cc and 4.80 ± 3.47 cc, respectively, after 2 years. The mean distance of the accommodation vector was 4.41 ± 1.94 mm on the right side and 4.74 ± 1.32 mm on the left side after 6 months and 1.62 ± 1.96 mm and 2.16 ± 1.52 mm, respectively, after 2 years.
The assessment of visible volume changes by optical three‐dimensional images can be carried out with considerable accuracy. The determination of volume changes and accompanying accommodation vectors completes the cephalometric analysis during the follow‐up of patients undergoing midfacial distraction. The new parameters will help to assess normative soft‐tissue data on the basis of three‐dimensional imaging with a view to an improved three‐dimensional prediction of the operative outcome of orthognathic surgery. (Plast. Reconstr. Surg. 112: 367, 2003.)
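A much simplified sketch of a visible volume change: if both surfaces are resampled as height maps on a common grid over the evaluated region, the volume change is the integral of the height difference. The synthetic Gaussian "swelling" and the height-map simplification are assumptions of this sketch (the study works on general 3D surface data):

```python
import numpy as np

def volume_change(z_before, z_after, pixel_area):
    """Volume between two height maps: sum of height differences x pixel area."""
    return (z_after - z_before).sum() * pixel_area

# Common grid over a 60 x 60 mm evaluation region
x = np.linspace(-30.0, 30.0, 301)                  # mm, 0.2 mm spacing
xx, yy = np.meshgrid(x, x)
pixel_area = (x[1] - x[0]) ** 2                    # mm^2 per grid cell

# Synthetic "swelling": a 2 mm high Gaussian bump, sigma = 8 mm
z_pre = np.zeros_like(xx)
sigma = 8.0
z_post = z_pre + 2.0 * np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

dv_cc = volume_change(z_pre, z_post, pixel_area) / 1000.0  # mm^3 -> cc
```

The analytic volume of the bump is height × 2πσ² ≈ 0.80 cc, which the discrete sum reproduces.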
Applied Optics | 1993
Gerd Häusler; Dieter Ritter
We discuss a three-dimensional sensor that combines coded illumination and triangulation. The sensor supplies the distance of ~250,000 object pixels (TV format) in 40 ms (a single TV frame period). The method is based on the following principle: the color spectrum of a white-light source is imaged onto the object from one fixed direction of illumination. The object is observed by a color TV camera from a direction of observation that is different from the direction of illumination. The color (hue) of each pixel is a measure of its distance from a reference plane. It can be evaluated from the three (red-green-blue) output channels of the CCD camera. This evaluation can be implemented in TV real time. Even colored objects can be measured. The resolution achieved is 50-150 depth steps.
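The decoding step — hue to distance — can be sketched as follows; the linear mapping and the calibration range are illustrative assumptions, not the paper's calibration:

```python
import colorsys

def hue_to_depth(r, g, b, depth_range_mm=100.0, hue_span=2.0 / 3.0):
    """Convert an RGB pixel to depth by mapping hue linearly onto distance.

    Hue 0 (red) is taken as the reference plane, hue 2/3 (blue) as the far
    end of an assumed 100 mm measuring range."""
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return (h / hue_span) * depth_range_mm

d_red = hue_to_depth(1.0, 0.0, 0.0)    # pure red   -> reference plane
d_green = hue_to_depth(0.0, 1.0, 0.0)  # pure green -> middle of the range
d_blue = hue_to_depth(0.0, 0.0, 1.0)   # pure blue  -> far end of the range
```

Because hue is a ratio of the RGB channels, the decoding is largely insensitive to the object's brightness, which is why even colored objects can be measured.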
Applied Optics | 1988
Gerd Häusler; Gerhard Schneider
We demonstrate a method for testing optics (spherical and aspheric) and other reflecting or transmitting objects. We call this experimental ray tracing. A laser beam is sent through the sample, and its propagation is determined with a lateral-effect photodiode. A modified Hartmann test can be performed by measuring the beam location within two planes. Measurement in one plane close to the focus delivers a spot diagram. The method is well suited for testing even strongly aspheric optics. As a further application, we demonstrate 3-D shape measurement of nonplanar glass plates, e.g., car windshields.
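The core geometric step of experimental ray tracing — recovering a ray from the spot positions measured in two planes behind the sample — can be sketched directly; the numbers are illustrative:

```python
import numpy as np

def ray_from_two_planes(p1, z1, p2, z2):
    """Ray (origin at plane z1, unit direction) from lateral spot positions
    p1 = (x1, y1) measured at z1 and p2 = (x2, y2) measured at z2."""
    origin = np.array([p1[0], p1[1], z1], dtype=float)
    direction = np.array([p2[0] - p1[0], p2[1] - p1[1], z2 - z1], dtype=float)
    return origin, direction / np.linalg.norm(direction)

# A ray that crosses the optical axis between the two measurement planes:
# x = +1 mm at z = 0, x = -1 mm at z = 100 -> axis crossing (focus) at z = 50
origin, d = ray_from_two_planes((1.0, 0.0), 0.0, (-1.0, 0.0), 100.0)
t = -origin[0] / d[0]
focus_z = origin[2] + t * d[2]
```

Repeating this for a grid of beam positions across the pupil yields the spot diagram and, via the crossing points, the aberrations of the sample.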
Applied Optics | 1988
K. Engelhardt; Gerd Häusler
A range sensing technique is demonstrated that finds the 3-D shape of diffusely reflecting objects. The technique works sequentially in the depth direction and is based on structured illumination and focus sensing. A TV camera and analog electronics are used to find the locations in focus at each step of a focus series in TV real time. The depth resolution is not very high; however, the technique is simple, rapid, and well suited to obtaining an overview of a scene in robot vision.
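The focus-sensing principle can be sketched for a 1-D scene: record a focus series, compute the local contrast per pixel in each frame, and assign each pixel the depth step at which its contrast peaks. The Gaussian-blur defocus model and all numbers are illustrative stand-ins for the TV-camera optics:

```python
import numpy as np

def local_contrast(signal, win=5):
    """Local standard deviation in a sliding window (a simple focus measure)."""
    pad = win // 2
    s = np.pad(signal, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(s, win)
    return windows.std(axis=1)

def depth_from_focus(stack, depths):
    """Per-pixel depth = depth step with maximum local contrast."""
    contrast = np.array([local_contrast(frame) for frame in stack])
    return depths[np.argmax(contrast, axis=0)]

def blur(sig, sigma):
    """Gaussian defocus blur (sigma = 0 means in focus)."""
    if sigma == 0:
        return sig.copy()
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return np.convolve(sig, k / k.sum(), mode="same")

# Textured scene: left half at depth step 0, right half at depth step 4
rng = np.random.default_rng(2)
texture = rng.standard_normal(200)
true_step = np.where(np.arange(200) < 100, 0, 4)

depths = np.arange(5)
stack = []
for d in depths:
    left = blur(texture, abs(d - 0))    # defocus grows with distance from step 0
    right = blur(texture, abs(d - 4))   # defocus grows with distance from step 4
    stack.append(np.where(true_step == 0, left, right))

est = depth_from_focus(np.array(stack), depths)
```

Pixels near the depth discontinuity are contaminated by the blur kernels, which mirrors the limited depth resolution noted above; away from the edge the assignment is reliable.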