
Publications


Featured research published by Wilfried Karel.


Computers, Environment and Urban Systems | 2014

OPALS – A framework for Airborne Laser Scanning data analysis

Norbert Pfeifer; Gottfried Mandlburger; Johannes Otepka; Wilfried Karel

A framework for Orientation and Processing of Airborne Laser Scanning point clouds, OPALS, is presented. It is designed to provide tools for all steps starting from full waveform decomposition, sensor calibration, quality control, and terrain model derivation, to vegetation and building modeling. The design rationales are discussed. The structure of the software framework enables the automatic and simultaneous building of command line executables, Python modules, and C++ classes from a single algorithm-centric repository. It makes extensive use of (industry) standards as well as cross-platform libraries. The framework provides data handling, logging, and error handling. Random, high-performance run-time access to the originally acquired point cloud is provided by the OPALS data manager, allowing storage of billions of 3D points and their additional attributes. As an example, geo-referencing of laser scanning strips is presented.
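The attribute-centric storage idea behind the data manager (random per-point access to coordinates plus arbitrary additional attributes) can be pictured with a small sketch. This is not the real OPALS API; all class and method names below are hypothetical illustrations of the concept:

```python
from dataclasses import dataclass, field

@dataclass
class PointCloudStore:
    """Stores 3D points plus arbitrary per-point attributes, with random access."""
    xyz: list = field(default_factory=list)    # list of (x, y, z) tuples
    attrs: dict = field(default_factory=dict)  # attribute name -> column of values

    def add_point(self, x, y, z, **attributes):
        idx = len(self.xyz)
        self.xyz.append((x, y, z))
        for name, value in attributes.items():
            # back-fill the column with None for points added before this attribute
            self.attrs.setdefault(name, [None] * idx).append(value)
        # pad attributes that were not supplied for this point
        for column in self.attrs.values():
            if len(column) < len(self.xyz):
                column.append(None)
        return idx

    def get(self, idx, attribute=None):
        """Random access: coordinates by default, or a named attribute."""
        if attribute is None:
            return self.xyz[idx]
        return self.attrs[attribute][idx]

store = PointCloudStore()
i = store.add_point(1.0, 2.0, 3.0, Amplitude=117, EchoWidth=4.1)
j = store.add_point(1.5, 2.5, 3.1, Amplitude=98)
print(store.get(i))               # (1.0, 2.0, 3.0)
print(store.get(j, "Amplitude"))  # 98
```

A real implementation would of course use spatial indexing and paged binary storage to scale to billions of points; the sketch only shows the access pattern.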


Good practice in archaeological diagnostics: non-invasive survey of complex archaeological sites | 2013

Undistorting the Past: New Techniques for Orthorectification of Archaeological Aerial Frame Imagery

Geert Verhoeven; Christopher Sevara; Wilfried Karel; Camillo Ressl; Michael Doneus; Christian Briese

Archaeologists using airborne data can encounter a large variety of frame images in the course of their work. These range from vertical aerial photographs acquired with very expensive calibrated optics to oblique images from hand-held, uncalibrated cameras and even photographs shot with compact cameras from an array of unmanned airborne solutions. Additionally, imagery can be recorded in one or more spectral bands of the complete optical electromagnetic spectrum. However, these aerial images are rather useless from an archaeological standpoint as long as they are not interpreted in detail. Furthermore, the relevant archaeological information interpreted from these images has to be mapped and compared with information from other sources. To this end, the imagery must be accurately georeferenced, and the many geometrical distortions induced by the optics, the terrain and the camera tilt should be corrected. This chapter focuses on several types of archaeological airborne frame imagery, the distortion factors that are influencing these two-dimensional still images and the necessary steps to compute orthophotographs from them. Rather than detailing the conventional photogrammetric orthorectification workflows, this chapter mainly centres on the use of computer vision-based solutions such as structure from motion (SfM) and dense multi-view stereo (MVS). In addition to a theoretical underpinning of the working principles and algorithmic steps included in both SfM and MVS, real-world imagery originating from traditional and more advanced airborne imaging platforms will be used to illustrate the possibilities of such a computer vision-based approach: the variety of imagery that can be dealt with, how (accurately) these images can be transformed into map-like orthophotographs and how these results can aid in the documentation of archaeological resources at a variety of spatial scales. 
Moreover, the case studies detailed in this chapter will also prove that this approach might move beyond current restrictions of conventional photogrammetry due to its applicability to datasets that were previously thought to be unsuitable for convenient georeferencing.
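The photogrammetric core of any orthorectification, conventional or SfM/MVS-based, is projecting ground points into the image via the collinearity equations. A minimal sketch follows; note that sign conventions and rotation order vary between textbooks, so this reflects just one common convention:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """R = Rz(kappa) @ Ry(phi) @ Rx(omega); angles in radians."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    Rx = [[1, 0, 0], [0, co, -so], [0, so, co]]
    Ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    Rz = [[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]]
    def mm(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    return mm(mm(Rz, Ry), Rx)

def project(ground, camera, R, f):
    """Collinearity equations: image coordinates (x, y) of a ground point seen by a
    camera with projection centre `camera`, rotation R and focal length f."""
    d = [g - c for g, c in zip(ground, camera)]
    # coordinates in the camera frame: u = R^T (X - X0); the camera looks along -z
    u = [sum(R[i][j] * d[i] for i in range(3)) for j in range(3)]
    return -f * u[0] / u[2], -f * u[1] / u[2]

# nadir camera 100 m above the ground, 100 mm focal length
R = rotation_matrix(0.0, 0.0, 0.0)
x, y = project((10.0, 0.0, 0.0), (0.0, 0.0, 100.0), R, 0.1)
```

SfM estimates exactly these unknowns (exterior orientations plus interior parameters) from image correspondences; MVS then densifies the reconstruction before the orthophoto is resampled.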


Proceedings of SPIE | 2009

Range camera calibration based on image sequences and dense comprehensive error statistics

Wilfried Karel; Norbert Pfeifer

This article concentrates on the integrated self-calibration of both the interior orientation and the distance measurement system of a time-of-flight range camera (photonic mixer device). Unlike other approaches that investigate individual distortion factors separately, in the presented approach all calculations are based on the same data set, which is captured without auxiliary devices serving as high-order reference, but with the camera guided by hand. Flat, circular targets with known positions, stuck on a planar whiteboard, are automatically tracked throughout the amplitude layer of long image sequences. These image observations are introduced into a bundle block adjustment, which results in the determination of the interior orientation. Capitalizing on the known planarity of the imaged board, the reconstructed exterior orientations furthermore allow for the derivation of reference values for the actual distance observations. Eased by the automatic reconstruction of the camera's trajectory and attitude, comprehensive statistics are generated, which are accumulated into a 5-dimensional matrix in order to be manageable. The marginal distributions of this matrix are inspected for the purpose of system identification, whereupon its elements are introduced into another least-squares adjustment, finally leading to clear range correction models and parameters.
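The final step described above, fitting a range correction model by least squares, can be illustrated with a simplified stand-in: a cyclic (sinusoidal) range error of known wavelength, which makes the fit linear. The actual correction models and error wavelength in the paper are sensor-specific; this sketch only shows the estimation pattern:

```python
import math

def fit_cyclic_range_error(distances, errors, wavelength):
    """Least-squares fit of err ~ c + a*sin(2*pi*d/L) + b*cos(2*pi*d/L), L known.
    Returns (c, a, b)."""
    n = len(distances)
    # design matrix rows: [1, sin, cos]
    A = [[1.0,
          math.sin(2 * math.pi * d / wavelength),
          math.cos(2 * math.pi * d / wavelength)] for d in distances]
    # normal equations N x = t
    N = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(3)] for i in range(3)]
    t = [sum(A[r][i] * errors[r] for r in range(n)) for i in range(3)]
    # Gauss-Jordan elimination on the 3x3 system, with partial pivoting
    M = [row[:] + [ti] for row, ti in zip(N, t)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                fct = M[r][col] / M[col][col]
                M[r] = [mr - fct * mc for mr, mc in zip(M[r], M[col])]
    return tuple(M[i][3] / M[i][i] for i in range(3))

# synthetic, noise-free check data with a hypothetical 1.875 m error wavelength
dists = [0.1 * i for i in range(1, 76)]
errs = [0.02 + 0.01 * math.sin(2 * math.pi * d / 1.875)
        - 0.005 * math.cos(2 * math.pi * d / 1.875) for d in dists]
c, a, b = fit_cyclic_range_error(dists, errs, 1.875)
```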


Remote Sensing | 2016

Automated Archiving of Archaeological Aerial Images

Michael Doneus; Martin Wieser; Geert Verhoeven; Wilfried Karel; Martin Fera; Norbert Pfeifer

The main purpose of any aerial photo archive is to allow quick access to images based on content and location. Therefore, next to a description of technical parameters and depicted content, georeferencing of every image is of vital importance. This can be done either by identifying the main photographed object (georeferencing of the image content) or by mapping the center point and/or the outline of the image footprint. The paper proposes a new image archiving workflow. The new pipeline is based on the parameters that are logged by a commercial, but cost-effective GNSS/IMU solution and processed with in-house-developed software. Together, these components allow one to automatically geolocate and rectify the (oblique) aerial images (by a simple planar rectification using the exterior orientation parameters) and to retrieve their footprints with reasonable accuracy, which is automatically stored as a vector file. The data of three test flights were used to determine the accuracy of the device, which turned out to be better than 1° for roll and pitch (mean between 0.0 and 0.21 with a standard deviation of 0.17–0.46) and better than 2.5° for yaw angles (mean between 0.0 and −0.14 with a standard deviation of 0.58–0.94). This turned out to be sufficient to enable a fast and almost automatic GIS-based archiving of all of the imagery.
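The footprint retrieval described above can be sketched as intersecting the four image-corner rays, rotated by the logged roll/pitch/yaw, with a horizontal ground plane. This is an illustrative simplification (flat terrain, one common angle convention), not the in-house software itself:

```python
import math

def rpy_matrix(roll, pitch, yaw):
    """Camera-to-world rotation, R = Rz(yaw) @ Ry(pitch) @ Rx(roll); radians."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    Rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]
    Ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    Rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]
    def mm(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    return mm(mm(Rz, Ry), Rx)

def footprint(position, roll, pitch, yaw, f, sensor_w, sensor_h, ground_z=0.0):
    """Ground footprint: intersect the four image-corner rays with the plane z = ground_z."""
    R = rpy_matrix(roll, pitch, yaw)
    corners = [(-sensor_w / 2, -sensor_h / 2), (sensor_w / 2, -sensor_h / 2),
               (sensor_w / 2, sensor_h / 2), (-sensor_w / 2, sensor_h / 2)]
    pts = []
    for cx, cy in corners:
        ray_cam = (cx, cy, -f)  # the camera looks along -z in its own frame
        ray = [sum(R[i][j] * ray_cam[j] for j in range(3)) for i in range(3)]
        s = (ground_z - position[2]) / ray[2]  # scale factor along the ray
        pts.append((position[0] + s * ray[0], position[1] + s * ray[1]))
    return pts

# nadir image from 100 m with a 50 mm lens on a 36 x 24 mm sensor
pts = footprint((0.0, 0.0, 100.0), 0.0, 0.0, 0.0, 0.05, 0.036, 0.024)
```

For an oblique image the same code yields a trapezoidal footprint, which is exactly why the roll/pitch/yaw angles from the GNSS/IMU log matter and a position alone is not enough.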


TOF Range-Imaging Cameras (pp. 117–138) | 2013

3D Cameras: Errors, Calibration and Orientation

Norbert Pfeifer; Derek D. Lichti; Jan Böhm; Wilfried Karel

Range cameras integrate different optical measurement techniques that provide coverage of an area. Firstly, they provide images like frame cameras, which deliver scene information in the form of texture. Secondly, they provide direct range measurement like laser scanners, but do so simultaneously for the entire field of view as opposed to the sequential operation of laser scanners. Finally, they provide image streams like video cameras. For specific measurement and modeling tasks, one observation technology, i.e. either passive imaging or laser scanning, is typically more suited than the other. The integrated aspects of 3D cameras have therefore triggered a lot of interest despite their relatively low resolution and accuracy. The potential to obtain 3D scene information instantly and directly has thus led different groups to investigate the data quality of 3D range cameras and models for their calibration and orientation.


Proceedings of SPIE | 2009

Range calibration for terrestrial laser scanners and range cameras

Norbert Pfeifer; Camillo Ressl; Wilfried Karel

Range cameras and terrestrial laser scanners provide 3D geometric information by directly measuring the range from the sensor to the object. Calibration of the ranging component has not been studied systematically yet, and this paper provides a first overview. The proposed approaches differ in the object space features used for calibration, the calibration models themselves, and possibly required environmental conditions. A number of approaches are reviewed within this framework and discussed. For terrestrial laser scanners, improvement in accuracy by a factor up to two is typical, whereas range camera calibration still lacks a proper model, and large systematic errors typically remain.


International Journal of Heritage in the Digital Era | 2014

Cost-effective geocoding with exterior orientation for airborne and terrestrial archaeological photography - possibilities and limitations

Martin Wieser; Geert Verhoeven; Christian Briese; Michael Doneus; Wilfried Karel; Norbert Pfeifer

Taking a photograph is often considered to be an indispensable procedural step in many archaeological fields (e.g. excavating), whereas some sub-disciplines (e.g. aerial archaeology) often consider photographs to be the prime data source. Whether they were acquired on the ground or from the air, digital cameras save with each photograph the exact date and time of acquisition and additionally make it possible to store the camera's geographical location in specific metadata fields. This location is typically obtained from GNSS (Global Navigation Satellite System) receivers, either operating in continuous mode to record the path of the camera platform, or by observing the position for each exposure individually. Although such positional information has huge advantages in archiving the imagery, this approach has several limitations, as it does not record the complete exterior orientation of the camera. More specifically, the essential roll, pitch and yaw camera angles are missing, thus the viewing direction and the camera rot...


Remote Sensing | 2018

Impact of the Acquisition Geometry of Very High-Resolution Pléiades Imagery on the Accuracy of Canopy Height Models over Forested Alpine Regions

Livia Piermattei; Mauro Marty; Wilfried Karel; Camillo Ressl; Markus Hollaus; Christian Ginzler; Norbert Pfeifer

This work focuses on the accuracy estimation of canopy height models (CHMs) derived from image matching of Pléiades stereo imagery over forested mountain areas. To determine the height above ground and hence canopy height in forest areas, we use normalised digital surface models (nDSMs), computed as the differences between external high-resolution digital terrain models (DTMs) and digital surface models (DSMs) from Pléiades image matching. With the overall goal of testing the operational feasibility of Pléiades images for forest monitoring over mountain areas, two questions guide this work whose answers can help in identifying the optimal acquisition planning to derive CHMs. Specifically, we want to assess (1) the benefit of using tri-stereo images instead of stereo pairs, and (2) the impact of different viewing angles and topography. To answer the first question, we acquired new Pléiades data over a study site in Canton Ticino (Switzerland), and we compare the accuracies of CHMs from Pléiades tri-stereo and from each stereo pair combination. We perform the investigation on different viewing angles over a study area near Ljubljana (Slovenia), where three stereo pairs were acquired at one-day offsets. We focus the analyses on open stable and on tree covered areas. To evaluate the accuracy of Pléiades CHMs, we use CHMs from aerial image matching and airborne laser scanning as reference for the Ticino and Ljubljana study areas, respectively. For the two study areas, the statistics of the nDSMs in stable areas show median values close to the expected value of zero. The smallest standard deviation based on the median of absolute differences (σMAD) was 0.80 m for the forward-backward image pair in Ticino and 0.29 m in Ljubljana for the stereo images with the smallest absolute across-track angle (−5.3°).
The differences between the highest accuracy Pléiades CHMs and their reference CHMs show a median of 0.02 m in Ticino with a σMAD of 1.90 m, and in Ljubljana a median of 0.32 m with a σMAD of 3.79 m. The discrepancies between these results are most likely attributed to differences in forest structure, particularly tree height, density, and forest gaps. Furthermore, it should be taken into account that temporal vegetational changes between the Pléiades and reference data acquisitions introduce additional, spurious CHM differences. Overall, for a narrow forward–backward angle of convergence (12°) and based on the software and workflow used to generate the nDSMs from Pléiades images, the results show that the differences between tri-stereo and stereo matching are rather small in terms of accuracy and completeness of the CHMs/nDSMs. Therefore, a small angle of convergence does not constitute a major limiting factor. More relevant is the impact of a large across-track angle (19°), which considerably reduces the quality of Pléiades CHMs/nDSMs.
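The two quantities used throughout this abstract, the nDSM as a per-cell DSM-minus-DTM difference and the robust spread σMAD, are straightforward to compute. A minimal sketch; the 1.4826 factor scales the MAD so it is comparable to a standard deviation under normally distributed errors:

```python
import statistics

def ndsm(dsm, dtm):
    """Normalised DSM: per-cell DSM minus DTM (canopy height over vegetation)."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def sigma_mad(values):
    """Robust spread estimate: 1.4826 * median(|v - median(v)|)."""
    med = statistics.median(values)
    return 1.4826 * statistics.median(abs(v - med) for v in values)

# toy 2x2 grids: DSM heights minus DTM heights give canopy heights
heights = ndsm([[5, 6], [7, 8]], [[1, 1], [2, 2]])  # [[4, 5], [5, 6]]
# sigma_mad barely reacts to the outlier 100, unlike a standard deviation
spread = sigma_mad([1, 2, 3, 4, 100])
```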


ISPRS International Journal of Geo-Information | 2018

Roughness Spectra Derived from Multi-Scale LiDAR Point Clouds of a Gravel Surface: A Comparison and Sensitivity Analysis

Milutin Milenković; Camillo Ressl; Wilfried Karel; Gottfried Mandlburger; Norbert Pfeifer

The roughness spectrum (i.e., the power spectral density) is a derivative of digital terrain models (DTMs) that is used as a surface roughness descriptor in many geomorphological and physical models. Although light detection and ranging (LiDAR) has become one of the main data sources for DTM calculation, it is still unknown how roughness spectra are affected when calculated from different LiDAR point clouds, or when they are processed differently. In this paper, we used three different LiDAR point clouds of a 1 m × 10 m gravel plot to derive and analyze the roughness spectra from the interpolated DTMs. The LiDAR point clouds were acquired using terrestrial laser scanning (TLS), and laser scanning from both an unmanned aerial vehicle (ULS) and an airplane (ALS). The corresponding roughness spectra are derived first as ensemble averaged periodograms and then the spectral differences are analyzed with a dB threshold that is based on the 95% confidence intervals of the periodograms. The aim is to determine scales (spatial wavelengths) over which the analyzed spectra can be used interchangeably. The results show that one TLS scan can measure the roughness spectra for wavelengths larger than 1 cm (i.e., two times its footprint size) and up to 10 m, with spectral differences less than 0.65 dB. For the same dB threshold, the ULS and TLS spectra can be used interchangeably for wavelengths larger than about 1.2 dm (i.e., five times the ULS footprint size). However, the interpolation parameters should be optimized to make the ULS spectrum more accurate at wavelengths smaller than 1 m. The plot size was, however, too small to draw particular conclusions about ALS spectra. These results show that novel ULS data has a high potential to replace TLS for roughness spectrum calculation in many applications.
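An ensemble-averaged periodogram of a height profile can be sketched in a few lines of plain Python. PSD normalisation conventions vary, so the scaling below is only one reasonable choice, not necessarily the one used in the paper:

```python
import cmath
import math

def periodogram(profile, dx):
    """One-sided periodogram (PSD estimate) of a height profile sampled at spacing dx."""
    n = len(profile)
    mean = sum(profile) / n
    centred = [h - mean for h in profile]  # remove the mean before the DFT
    psd = []
    for k in range(n // 2 + 1):
        X = sum(h * cmath.exp(-2j * cmath.pi * k * i / n) for i, h in enumerate(centred))
        psd.append(abs(X) ** 2 * dx / n)   # psd[k]: power at spatial wavelength n*dx/k
    return psd

def ensemble_average(periodograms):
    """Bin-wise average of several periodograms, reducing the estimator variance."""
    m = len(periodograms)
    return [sum(p[k] for p in periodograms) / m for k in range(len(periodograms[0]))]

# a pure sinusoid with 4 cycles over the profile concentrates its power in bin 4
profile = [math.sin(2 * math.pi * 4 * i / 64) for i in range(64)]
psd = periodogram(profile, 0.01)
print(psd.index(max(psd)))  # -> 4
```

A production implementation would use an FFT; the direct DFT here only keeps the example dependency-free.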


ISPRS Journal of Photogrammetry and Remote Sensing | 2006

Accuracy of large-scale canopy heights derived from LiDAR data under operational constraints in a complex alpine environment

Markus Hollaus; W. Wagner; C. Eberhöfer; Wilfried Karel

Collaboration


Dive into Wilfried Karel's collaborations.

Top Co-Authors

Norbert Pfeifer (Vienna University of Technology)
Camillo Ressl (Vienna University of Technology)
Christian Briese (Vienna University of Technology)
Gottfried Mandlburger (Vienna University of Technology)
Markus Hollaus (Vienna University of Technology)
Martin Wieser (Vienna University of Technology)
Johannes Otepka (Vienna University of Technology)
Milutin Milenković (Vienna University of Technology)