
Publication


Featured research published by B. Jutzi.


Computers & Graphics | 2015

Distinctive 2D and 3D features for automated large-scale scene analysis in urban areas

Martin Weinmann; Steffen Urban; Stefan Hinz; B. Jutzi; Clément Mallet

We propose a new methodology for large-scale urban 3D scene analysis in terms of automatically assigning 3D points the respective semantic labels. The methodology focuses on simplicity and reproducibility of the involved components as well as performance in terms of accuracy and computational efficiency. Exploiting a variety of low-level 2D and 3D geometric features, we further improve their distinctiveness by involving individual neighborhoods of optimal size. Due to the use of individual neighborhoods, the methodology is not tailored to a specific dataset, but in principle designed to process point clouds with a few million 3D points. Consequently, an extension has to be introduced for analyzing huge 3D point clouds with possibly billions of points for a whole city. For this purpose, we propose an extension which is based on an appropriate partitioning of the scene and thus allows successive processing in a reasonable time without affecting the quality of the classification results. We demonstrate the performance of our methodology on two labeled benchmark datasets with respect to robustness, efficiency, and scalability.

Highlights:
- We present a new methodology for large-scale urban 3D point cloud classification.
- We analyze a strategy for recovering individual 3D neighborhoods of optimal size.
- Our methodology involves efficient feature extraction and classification.
- Our methodology contains an extension towards data-intensive processing.
- We evaluate our methodology on two recent, publicly available point cloud datasets.
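
To make the notion of low-level 3D geometric features more concrete, the following sketch computes the widely used eigenvalue-based shape measures (linearity, planarity, sphericity) from the covariance matrix of a local point neighborhood. The fixed neighborhood size and the exact feature set are illustrative assumptions; the paper itself selects individual neighborhoods of optimal size per point.

```python
import numpy as np

def eigenvalue_features(neighborhood):
    """Eigenvalue-based 3D shape features for one local neighborhood (k x 3 array).

    A minimal sketch: the covariance matrix of the neighborhood is decomposed,
    and normalized eigenvalues yield linearity, planarity, and sphericity.
    """
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    # Eigenvalues of the symmetric 3x3 covariance matrix, ascending order
    l3, l2, l1 = np.linalg.eigvalsh(cov)            # l1 >= l2 >= l3 after unpacking
    l1, l2, l3 = l1 + 1e-12, l2 + 1e-12, l3 + 1e-12  # guard against division by zero
    return {
        "linearity":  (l1 - l2) / l1,
        "planarity":  (l2 - l3) / l1,
        "sphericity": l3 / l1,
    }

# Toy usage with a nearly planar neighborhood of k = 50 points
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3)) * np.array([1.0, 1.0, 0.01])
print(eigenvalue_features(pts))  # planarity should dominate
```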


Photogrammetric Engineering and Remote Sensing | 2010

Investigations on Surface Reflection Models for Intensity Normalization in Airborne Laser Scanning (ALS) Data

B. Jutzi; Hermann Gross

The analysis of laser scanner data is of great interest for gaining geospatial information. Especially for segmentation, classification, or visualization purposes, the intensity measured with a laser scanner device can be helpful. For automatic intensity normalization, various aspects are of concern, like beam divergence and atmospheric attenuation, both depending on the range. Additionally, the intensity is influenced by the incidence angle between the beam propagation direction and the surface orientation. To gain the surface orientation, the eigenvectors of the covariance matrix for object points within a nearby environment are determined. After normalization, the intensity no longer depends on the incidence angle and is influenced by the material of the surface only. For surface reflection modeling, (a) the Lambertian, (b) the extended Lambertian, and (c) the Phong reflection model are introduced to consider diffuse and specular backscattering characteristics of the surface. An airborne measurement campaign was carried out to investigate the influences of the incidence angle on the measured intensity. For the investigations, 17 urban areas, such as traffic, building, and vegetation regions, were studied and the derived improvements are depicted. The investigation shows that large intensity variations caused by the object surface orientation and the distance between sensor and object can be normalized by utilizing the standard Lambertian reflection model.
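
As a rough illustration of the normalization described here, the sketch below estimates the surface normal from the eigenvectors of the local covariance matrix and corrects the measured intensity for range and incidence angle with a standard Lambertian model. The reference range and the exact correction terms are assumptions, not the paper's calibrated procedure.

```python
import numpy as np

def surface_normal(neighbor_pts):
    """Surface normal as the eigenvector belonging to the smallest eigenvalue
    of the local covariance matrix (as described in the abstract)."""
    centered = neighbor_pts - neighbor_pts.mean(axis=0)
    cov = centered.T @ centered / len(neighbor_pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, 0]                             # smallest-eigenvalue eigenvector

def normalize_intensity(intensity, point, sensor_pos, neighbor_pts, ref_range=1000.0):
    """A minimal sketch of Lambertian intensity normalization: correct the
    measured intensity for range (inverse square) and incidence angle.
    The reference range and correction terms are assumptions, not the
    paper's calibrated model."""
    beam = sensor_pos - point
    dist = np.linalg.norm(beam)
    n = surface_normal(neighbor_pts)
    cos_incidence = abs(np.dot(beam / dist, n))       # angle between beam and surface normal
    cos_incidence = max(cos_incidence, 1e-3)          # guard against grazing angles
    return intensity * (dist / ref_range) ** 2 / cos_incidence

# Toy usage: a horizontal patch of points observed obliquely from 500 m altitude
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(size=(20, 2)), np.zeros(20)]   # points on the plane z = 0
print(normalize_intensity(100.0, patch[0], np.array([300.0, 0.0, 500.0]), patch))
```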


International Journal of Image and Data Fusion | 2014

Weighted data fusion for UAV-borne 3D mapping with camera and line laser scanner

B. Jutzi; Martin Weinmann; Jochen Meidow

Unmanned aerial vehicles (UAVs) equipped with adequate sensors have nowadays become a powerful tool for capturing spatial information. In this article, we introduce a concept for weighted data fusion in order to enable an improved UAV-borne 3D mapping with a camera and a lightweight line laser scanner. For this purpose, we carry out geometric camera calibration as well as lever-arm and bore-sight calibration and subsequently present a new methodology for incorporating camera images and laser scanner data into an adjustment process. This adjustment is based on the concept of variance components in order to obtain a reasonable weight ratio for data fusion and accurately estimate the poses of the sensors. We demonstrate the feasibility of the proposed approach and show that the consideration of range measurements clearly improves the pose estimation.
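
The following sketch illustrates the underlying idea of variance-component-based weighting on a toy linear Gauss-Markov model with two observation groups; the simplified (Foerstner-type) estimator and the synthetic data are assumptions and do not reproduce the paper's full photogrammetric adjustment.

```python
import numpy as np

def estimate_variance_components(A_groups, l_groups, iterations=20):
    """A minimal sketch of iterative variance component estimation for two
    observation groups (e.g. image and range measurements) in a linear
    Gauss-Markov model l = A x + v. Weights P_i = I / sigma_i^2 are rescaled
    until each group's variance factor of unit weight approaches 1."""
    sigma2 = np.ones(len(A_groups))                   # current variance factors per group
    for _ in range(iterations):
        # Normal equations with block-diagonal weights P_i = I / sigma_i^2
        N = sum(A.T @ A / s2 for A, s2 in zip(A_groups, sigma2))
        b = sum(A.T @ l / s2 for A, l, s2 in zip(A_groups, l_groups, sigma2))
        x = np.linalg.solve(N, b)
        N_inv = np.linalg.inv(N)
        for i, (A, l) in enumerate(zip(A_groups, l_groups)):
            v = A @ x - l                                         # residuals of group i
            redundancy = len(l) - np.trace(A @ N_inv @ A.T) / sigma2[i]
            s2_unit = (v @ v / sigma2[i]) / redundancy            # variance factor of unit weight
            sigma2[i] *= s2_unit                                  # rescale group variance
    return x, sigma2

# Toy usage: two groups observing the same two parameters with different noise levels
rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])
A1, A2 = rng.normal(size=(30, 2)), rng.normal(size=(40, 2))
l1 = A1 @ x_true + rng.normal(scale=0.05, size=30)    # precise observation group
l2 = A2 @ x_true + rng.normal(scale=0.50, size=40)    # noisy observation group
x_hat, sigma2_hat = estimate_variance_components([A1, A2], [l1, l2])
print(x_hat, np.sqrt(sigma2_hat))   # standard deviations roughly recover 0.05 and 0.50
```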


Image and signal processing for remote sensing VIII. Ed.: S.B. Serpico | 2003

Estimation and measurement of backscattered signals from pulsed laser radar

B. Jutzi; Bernd Eberle; Uwe Stilla

Current pulsed laser radar systems for ranging purposes are based on time-of-flight techniques. Nowadays, first-pulse as well as last-pulse exploitation is used for different applications, e.g. urban planning and forestry surveying. Besides this technique of time measurement, the complete signal form over time might be of interest, because it includes the backscattering characteristic of the illuminated field. This characteristic can be used for estimating the aspect angle of a plane with a special surface property or estimating the surface property of a plane with a special aspect angle. In this paper, a monostatic bi-directional experimental system with a fast digitizing receiver is described. The spatio-temporal beam propagation, the spatial reflectance of the surface, and the receiver properties are modeled. A time-dependent description of the received signal power is derived and our special surface property is considered. The spatial distribution of the used laser beam was measured and displayed by the beam profile. For a plane surface under various aspect angles, the transversal distributions of the beam were simulated and measured. For these angles, the corresponding temporal beam distributions were measured and compared with their pulse widths. The pulse spread is used to estimate the aspect angle of the illuminated object. The statistics for different angles were calculated. Different approaches which detect a characteristic time value were compared and evaluated. The consideration of the signal form allows a more precise determination of the time-of-flight. A 3-D visualization of equi-irradiance surfaces allows assessing the spatio-temporal shape of the pulses.
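
As an illustration of deriving a characteristic time value and a pulse width from a digitized waveform, the sketch below uses a simple centroid/RMS-width estimator; the estimator and threshold are assumptions, whereas the paper compares several such detection approaches.

```python
import numpy as np

def waveform_timing(samples, dt, threshold_ratio=0.1):
    """A minimal sketch of extracting a characteristic time value and a pulse
    width from a digitized received waveform, using a centroid for the
    time-of-flight and an RMS width for the pulse spread."""
    t = np.arange(len(samples)) * dt
    s = samples - np.median(samples)              # crude noise-floor removal
    mask = s > threshold_ratio * s.max()          # keep only the pulse region
    w = s[mask]
    centroid = np.sum(t[mask] * w) / np.sum(w)    # characteristic time value (ToF)
    width = np.sqrt(np.sum(((t[mask] - centroid) ** 2) * w) / np.sum(w))  # RMS pulse width
    return centroid, width

# Toy usage: a Gaussian pulse sampled at 1 ns, centered at 100 ns with 5 ns sigma
dt = 1e-9
t = np.arange(0, 200e-9, dt)
pulse = np.exp(-0.5 * ((t - 100e-9) / 5e-9) ** 2) \
        + 0.01 * np.random.default_rng(2).normal(size=t.size)
tof, spread = waveform_timing(pulse, dt)
print(tof * 1e9, spread * 1e9)   # roughly 100 ns and a width on the order of the 5 ns pulse
```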


Archive | 2015

Methods for automatic scene characterization based on active optical sensors for photogrammetry and remote sensing

B. Jutzi

In computer vision, photogrammetry, and remote sensing, active optical sensors are increasingly used to capture the environment. The technical designs of these sensors are diverse, ranging from structured-light projection sensors and range cameras to laser scanners. The habilitation thesis addresses sensor-specific properties, provides insights into methodical scene characterization, and furthermore presents practice-oriented applications.


2003 2nd GRSS/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas | 2003

Analysis of laser pulses for gaining surface features of urban objects

B. Jutzi; Uwe Stilla

In this paper we describe investigations for a detailed analysis of laser pulses. Different techniques for the measurement of time-resolved laser pulses are presented. An experimental system for fast recording of signals was built. For principal investigations, a test board with urban materials was measured by a single photon detection technique and visualized by a data cube. Based on these spatio-temporal data, features are extracted to describe macro, meso, and micro structures. The results are depicted as gray-value images. The limitation of an airborne system based on single photon detection is discussed.


Signal and data processing of small targets. Ed.: O.E. Drummond | 2001

Stereo vision for small targets in IR image sequences

B. Jutzi; Richard Gabler; Klaus Jaeger

Surveillance systems against missile attacks require the automatic detection of targets with a low false alarm rate (FAR). Infrared Search and Track (IRST) systems offer passive detection of threats at long ranges. To maximize reaction time and allow for countermeasures, it is necessary to declare the objects as early as possible. For this purpose, the detection and tracking algorithms have to deal with point objects. Conventional object features like shape, size, and texture are usually unreliable for small objects. More reliable features of point objects are their three-dimensional spatial position and velocity. At least two sensors observing the same scene are required for multi-ocular stereo vision. Mainly three steps are relevant for successful stereo image processing. First, precise camera calibration (estimating the intrinsic and extrinsic parameters) is necessary to satisfy the demand for a high degree of accuracy, especially for long-range targets. Secondly, the correspondence problem for the detected objects must be solved. Thirdly, the three-dimensional location of the potential target has to be determined by projective transformation. For evaluation, a measurement campaign to capture image data was carried out with real targets using two identical IR cameras, and additionally synthetic IR image sequences were generated and processed. In this paper, a straightforward solution for stereo analysis based on stationary binocular sensors is presented, the current results are shown, and suggestions for future work are given.
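
The third step, determining the three-dimensional location by projective transformation, can be illustrated with standard linear (DLT) triangulation from two calibrated cameras; the projection matrices below are toy assumptions standing in for the calibration results, not the paper's exact formulation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """A minimal sketch of linear (DLT) triangulation: recover the 3D position
    of a point target from its image coordinates in two calibrated cameras
    with 3x4 projection matrices P1 and P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector of A in homogeneous coordinates
    return X[:3] / X[3]              # dehomogenize to a 3D point

# Toy usage: two cameras with identity intrinsics, one meter apart along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 10.0, 1.0])            # a distant point target
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))                    # approximately [0.3, -0.2, 10.0]
```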


Urban Remote Sensing Joint Event | 2013

Fast and accurate point cloud registration by exploiting inverse cumulative histograms (ICHs)

Martin Weinmann; B. Jutzi

The automatic and accurate alignment of captured point clouds is an important task for the digitization, reconstruction and interpretation of 3D scenes. Standard approaches such as the ICP algorithm and Least Squares 3D Surface Matching require a good a priori alignment of the scans for obtaining satisfactory results. In this paper, we propose a new and fast methodology for automatic point cloud registration which does not require a good a priori alignment and is still able to recover the transformation parameters between two point clouds very accurately. The registration process is divided into a coarse registration based on 3D/2D correspondences and a fine registration exploiting 3D/3D correspondences. As the reliability of single 3D/2D correspondences is directly taken into account by applying Inverse Cumulative Histograms (ICHs), this approach is also capable of detecting reliable tie points, even when using noisy raw point cloud data. The performance of the proposed methodology is demonstrated on a benchmark dataset, which allows for direct comparison with other existing or future approaches.
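
For the fine registration step based on 3D/3D correspondences, the transformation parameters can be obtained in closed form; the sketch below shows the standard SVD-based (Kabsch/Horn) estimation of a rigid transformation from given correspondences. It illustrates only this parameter estimation and does not reproduce the ICH-based matching itself.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """A minimal sketch of estimating the rigid transformation (R, t) that maps
    source points onto destination points from given 3D/3D correspondences
    (closed-form SVD/Kabsch solution)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy usage: rotate/translate a random cloud and recover the transformation
rng = np.random.default_rng(3)
src = rng.normal(size=(100, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
R_est, t_est = rigid_transform_3d(src, dst)
print(np.allclose(R_est, R_true), t_est)               # True, approx [0.5, -1.0, 2.0]
```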


Urban Remote Sensing Joint Event | 2007

Simulation and analysis of full-waveform laser data of urban objects

B. Jutzi; Uwe Stilla

The analysis of data derived by full-waveform laser scanning systems is of great interest. In this study, we use a simulated surface response to estimate the slope of a plane surface by full-waveform analysis. For the analysis, the transmitted waveform of the emitted pulse is used to estimate the received waveform of the backscattered pulse for a known surface. We simulated a plane surface with different slopes. Typical spatial beam distributions, namely Gaussian and uniform, are considered for modeling. The surface response is determined and the corresponding received waveform is calculated. The normalized cross-correlation function between the estimated and the measured waveform is used to determine the slope of the surface. The similarity of the estimated and the measured received waveform is compared and discussed.
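
A minimal sketch of the similarity measure mentioned above: the peak of the normalized cross-correlation between estimated and measured waveforms, used here to pick the best-matching slope hypothesis. The waveform simulation and the slope/width pairs are illustrative assumptions, not the paper's surface-response model.

```python
import numpy as np

def normalized_cross_correlation(estimated, measured):
    """Peak of the normalized cross-correlation between an estimated and a
    measured received waveform (1.0 for identical pulse shapes)."""
    e = (estimated - estimated.mean()) / estimated.std()
    m = (measured - measured.mean()) / measured.std()
    ncc = np.correlate(e, m, mode="full") / len(e)
    return ncc.max()

# Toy usage: compare a "measured" pulse against slope-broadened candidate pulses
# and pick the slope whose estimated waveform matches the measurement best
t = np.linspace(-20, 20, 401)
measured = np.exp(-0.5 * (t / 3.0) ** 2)                      # measured pulse, width 3
candidates = {slope: np.exp(-0.5 * (t / w) ** 2)              # estimated pulse per slope
              for slope, w in [(0, 2.0), (30, 3.0), (60, 4.5)]}
best = max(candidates, key=lambda s: normalized_cross_correlation(candidates[s], measured))
print(best)   # 30
```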


Journal of Imaging | 2017

LaFiDa—A Laserscanner Multi-Fisheye Camera Dataset

Steffen Urban; B. Jutzi

In this article, the Laserscanner Multi-Fisheye Camera Dataset (LaFiDa) for benchmarking is presented. A head-mounted multi-fisheye camera system combined with a mobile laserscanner was utilized to capture the benchmark datasets. Besides this, accurate six degrees of freedom (6 DoF) ground truth poses were obtained from a motion capture system with a sampling rate of 360 Hz. Multiple sequences were recorded in indoor and outdoor environments, comprising different motion characteristics, lighting conditions, and scene dynamics. The provided sequences consist of images from three fisheye cameras, fully synchronized by hardware trigger, combined with a mobile laserscanner on the same platform. In total, six trajectories are provided. Each trajectory also comprises intrinsic and extrinsic calibration parameters and related measurements for all sensors. Furthermore, we generalize the most common toolbox for extrinsic laserscanner-to-camera calibration to work with arbitrary central cameras, such as omnidirectional or fisheye projections. The benchmark dataset is released online under the Creative Commons Attribution License (CC BY 4.0), and it contains raw sensor data and specifications like timestamps, calibration, and evaluation scripts. The provided dataset can be used for multi-fisheye camera and/or laserscanner simultaneous localization and mapping (SLAM).

Collaboration


Dive into B. Jutzi's collaborations.

Top Co-Authors

Martin Weinmann, Karlsruhe Institute of Technology
Stefan Hinz, Karlsruhe Institute of Technology
Jens Leitloff, Karlsruhe Institute of Technology
Rosmarie Blomley, Karlsruhe Institute of Technology
S. Wursthorn, Karlsruhe Institute of Technology
Steffen Urban, Karlsruhe Institute of Technology
Uwe Weidner, Karlsruhe Institute of Technology
Ana Djuricic, Karlsruhe Institute of Technology