Gustav Tolt
Swedish Defence Research Agency
Publications
Featured research published by Gustav Tolt.
International Geoscience and Remote Sensing Symposium | 2011
Gustav Tolt; Michal Shimoni; Jörgen Ahlberg
In this paper, a shadow detection method combining hyperspectral and LIDAR data analysis is presented. First, a rough shadow image is computed through line-of-sight analysis on a Digital Surface Model (DSM), using an estimate of the position of the sun at the time of image acquisition. Then, large shadow and non-shadow areas in that image are detected and used for training a supervised classifier (a Support Vector Machine, SVM) that classifies every pixel in the hyperspectral image as shadow or non-shadow. Finally, small holes are filled through morphological image analysis. The method was tested on data including a 24-band hyperspectral image in the VIS/NIR domain (50 cm spatial resolution) and a DSM of 25 cm resolution. The results were in good accordance with visual interpretation. As the line-of-sight analysis step is only used for training, geometric mismatches (about 2 m) between LIDAR and hyperspectral data did not affect the results significantly, nor did uncertainties regarding the position of the sun.
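The line-of-sight step of the abstract above can be sketched roughly as follows: march from each DSM cell toward the sun and mark the cell as shadow if any cell along the ray rises above the sun-elevation line. The function name, the rounding-based grid traversal, and the azimuth convention (degrees from north, clockwise) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rough_shadow_mask(dsm, sun_azimuth_deg, sun_elevation_deg, cell_size=1.0):
    """Line-of-sight shadow estimate on a DSM: a cell is in shadow if some
    cell along the ray toward the sun is higher than the sun ray above it."""
    rows, cols = dsm.shape
    az = np.deg2rad(sun_azimuth_deg)
    tan_el = np.tan(np.deg2rad(sun_elevation_deg))
    # unit step toward the sun (azimuth from north, clockwise)
    dr, dc = -np.cos(az), np.sin(az)
    shadow = np.zeros_like(dsm, dtype=bool)
    for r in range(rows):
        for c in range(cols):
            h = dsm[r, c]
            for step in range(1, rows + cols):
                rr = int(round(r + dr * step))
                cc = int(round(c + dc * step))
                if not (0 <= rr < rows and 0 <= cc < cols):
                    break
                # height of the sun ray above the starting cell at this step
                ray_h = h + step * cell_size * tan_el
                if dsm[rr, cc] > ray_h:
                    shadow[r, c] = True
                    break
    return shadow
```

In the paper this rough mask only supplies training regions for the SVM, which is why the ~2 m geometric mismatch does not hurt the final per-pixel classification.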
Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing | 2011
Michal Shimoni; Gustav Tolt; Christiaan Perneel; Jörgen Ahlberg
This paper presents a new method to automatically detect occluded vehicles in semi- or deep-shadow areas using combined very high resolution (VHR) 3D LIDAR and hyperspectral data. The proposed shape/spectral integration (SSI) decision fusion algorithm was shown to outperform the spectral-based anomaly algorithm, mainly in deep-shadow areas. The fusion of LIDAR DSM data with spectral data is useful for detecting vehicles in semi- and deep-shadow areas, and shape information was shown to enhance spectral target detection in complex urban scenes.
International Geoscience and Remote Sensing Symposium | 2011
Michal Shimoni; Gustav Tolt; Christiaan Perneel; Jörgen Ahlberg
In an effort to overcome the limitations of small-target detection in complex urban scenes, complementary data sets are combined to provide additional insight about a particular scene. This paper presents a method based on a shape/spectral integration (SSI) decision-level fusion algorithm to improve the detection of vehicles in semi- and deep-shadow areas. A four-step process combines high-resolution LIDAR and hyperspectral data to classify shadow areas, segment vehicles in the LIDAR data, detect spectral anomalies, and improve vehicle detection. The SSI decision-level fusion algorithm was shown to outperform detection using a single data set, and shape information was shown to enhance spectral target detection in complex urban scenes.
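One common way to realize decision-level fusion of the kind the SSI abstracts describe is a shadow-dependent weighting of the two per-pixel scores: in shadow, where spectra are unreliable, the LIDAR-derived shape score dominates; in sunlight, the spectral anomaly score does. The weighting rule and the value 0.8 below are assumptions for illustration, not the paper's actual fusion rule.

```python
import numpy as np

def ssi_fuse(spectral_score, shape_score, shadow_mask, shadow_weight=0.8):
    """Decision-level fusion of per-pixel detection scores: shadow pixels
    weight the shape evidence by shadow_weight, sunlit pixels by its
    complement."""
    w = np.where(shadow_mask, shadow_weight, 1.0 - shadow_weight)
    return w * np.asarray(shape_score) + (1.0 - w) * np.asarray(spectral_score)
```

A final detection would threshold the fused score; the point of the scheme is that each modality compensates where the other is weakest.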
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Gustav Tolt; Asa Persson; Jonas Landgård; Ulf Söderman
In this paper, a number of techniques for segmentation and classification of airborne laser scanner data are presented. First, a method for ground estimation is described that is based on region growing starting from a set of ground seed points. In order to prevent misclassification of buildings and vegetation as ground, a number of non-ground regions are first extracted, in which seed points should be discarded. Then, a decision-level fusion approach for building detection is proposed, in which the outputs of different classifiers are combined in order to improve the final classification results. Finally, a technique for building reconstruction is briefly outlined. In addition to being a tool for creating 3D building models, it also serves as a final step in the building classification process since it excludes regions not belonging to any roof segment in the final building model.
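The region-growing ground estimation can be sketched on a raster DSM as a flood fill from seed cells that accepts a neighbour only if its height is close to the current ground cell, so buildings (abrupt height jumps) are never entered. The 4-neighbourhood and the 0.3 m step threshold are illustrative assumptions.

```python
from collections import deque
import numpy as np

def grow_ground(dsm, seeds, max_step=0.3):
    """Grow a ground mask from seed cells: accept a 4-neighbour if its
    height differs from the current ground cell by less than max_step."""
    rows, cols = dsm.shape
    ground = np.zeros_like(dsm, dtype=bool)
    queue = deque(seeds)
    for r, c in seeds:
        ground[r, c] = True
    while queue:
        r, c = queue.popleft()
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < rows and 0 <= cc < cols and not ground[rr, cc]:
                if abs(dsm[rr, cc] - dsm[r, c]) < max_step:
                    ground[rr, cc] = True
                    queue.append((rr, cc))
    return ground
```

This also shows why the paper discards seed points inside extracted non-ground regions: a seed placed on a roof would grow over the whole building.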
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Christina Grönwall; Tomas Chevalier; Gustav Tolt; Pierre Andersson
Laser-based 3D sensors measure range with high accuracy and allow for detection of several reflecting surfaces for each emitted laser pulse. This makes them particularly suitable for sensing objects behind various types of occlusion, e.g. camouflage nets and tree canopies. Nevertheless, automatic detection and recognition of targets in forested areas is a challenging research problem, especially since foreground objects often cause targets to appear fragmented. In this paper we propose a sequential approach for detection and recognition of man-made objects in natural forest environments using data from laser-based 3D sensors. First, ground samples and samples too far above the ground (that cannot possibly originate from a target) are identified and removed from further processing. This step typically results in a dramatic data reduction. Possible target samples are then detected using a local flatness criterion, based on the assumption that targets are among the most structured objects in the remaining data. The set of samples is reduced further through shadow analysis, where any possible target locations are found by identifying regions that are occluded by foreground objects. Since we anticipate that targets appear fragmented, the remaining samples are grouped into a set of larger segments, based on general target characteristics such as maximal dimensions and generic shape. Finally, the segments, each of which corresponds to a target hypothesis, undergo automatic target recognition in order to find the best match from a model library. The approach is evaluated in terms of ROC on real data from scenes in forested areas.
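A local flatness criterion of the kind mentioned above is commonly computed from the eigenvalues of the covariance of a point's neighbourhood: for a locally planar (man-made) surface the smallest eigenvalue is near zero, while volumetric clutter such as foliage spreads variance over all three directions. This sketch uses that standard construction; the radius and thresholds are assumptions, not the paper's values.

```python
import numpy as np

def flatness(points, query_idx, radius=1.0):
    """Local flatness of a 3D point's neighbourhood: smallest eigenvalue of
    the neighbourhood covariance divided by the eigenvalue sum. Close to 0
    for a plane, up to 1/3 for isotropic volume clutter."""
    p = points[query_idx]
    nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
    if len(nbrs) < 3:
        return 1.0  # too few samples to judge; treat as non-flat
    ev = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
    return float(ev[0] / ev.sum())
```

Thresholding this score keeps structured (candidate target) samples and discards vegetation, which is the data-reduction effect the abstract describes.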
Optical Engineering | 2011
Christina Grönwall; Gustav Tolt; Tomas Chevalier; Håkan Larsson
A Bayesian approach for data reduction based on spatial filtering is proposed that enables detection of targets partly occluded by natural forest. The framework aims at creating a synergy between terrain mapping and target detection. We demonstrate how spatial features can be extracted and combined in order to detect target samples in cluttered environments. In particular, we illustrate how a priori scene information and assumptions about targets can be translated into algorithms for feature extraction. We also analyze the coupling between features and assumptions, because it reveals which features are general enough to be useful in other environments and which are tailored for a specific situation. Two types of features are identified: non-target indicators and target indicators. The filtering approach is based on a combination of several features. A theoretical framework for combining the features into a maximum likelihood classification scheme is presented. The approach is evaluated using data collected with a laser-based 3-D sensor in various forest environments with vehicles as targets. Over 70% of the target points are detected at a false-alarm rate of <1%. We also demonstrate how selecting different feature subsets influences the results.
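The simplest maximum-likelihood scheme for combining several features assumes they are conditionally independent and Gaussian, so the per-feature log-likelihood ratios simply add; a positive sum means the target hypothesis is more likely. This sketch shows only that generic construction, under those stated independence and Gaussianity assumptions, with a shared sigma for brevity; the paper's actual feature models are not reproduced here.

```python
import numpy as np

def ml_target_decision(features, target_means, clutter_means, sigma=1.0):
    """Maximum-likelihood decision from independent Gaussian features:
    sum the per-feature log-likelihood ratios (target vs. clutter) and
    declare 'target' if the sum is positive."""
    f = np.asarray(features, float)
    t = np.asarray(target_means, float)
    c = np.asarray(clutter_means, float)
    # log N(f; t, sigma) - log N(f; c, sigma), constants cancel
    llr = ((f - c) ** 2 - (f - t) ** 2) / (2.0 * sigma ** 2)
    return bool(llr.sum() > 0.0)
```

Dropping a feature from the sum is how different feature subsets can be compared, as in the paper's subset-selection experiments.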
Image and Signal Processing for Remote Sensing XVII, Prague, Czech Republic, 19–21 September 2011 | 2011
Ola Friman; Gustav Tolt; Jörgen Ahlberg
Object detection and material classification are two central tasks in electro-optical remote sensing and hyperspectral imaging applications. These are challenging problems as the measured spectra in hyperspectral images from satellite or airborne platforms vary significantly depending on the light conditions at the imaged surface, e.g., shadow versus non-shadow. In this work, a Digital Surface Model (DSM) is used to estimate different components of the incident light. These light components are subsequently used to predict what a measured spectrum would look like under different light conditions. The derived method is evaluated using an urban hyperspectral data set with 24 bands in the wavelength range 381.9 nm to 1040.4 nm and a DSM created from LIDAR 3D data acquired simultaneously with the hyperspectral data.
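The prediction step described above can be sketched with a simple per-band light model: measured radiance is reflectance times incident light, where a shadowed surface receives only the diffuse (sky) component and a sunlit one receives direct plus diffuse. Dividing out the estimated incident light and multiplying by the sunlit illumination predicts the sunlit appearance. This ignores effects such as reflections from nearby surfaces; it is a minimal sketch, not the paper's full model.

```python
import numpy as np

def predict_sunlit(spectrum, direct, diffuse, in_shadow):
    """Predict the sunlit appearance of a measured spectrum, assuming
    radiance = reflectance * incident light per band, and that shadowed
    surfaces are lit by the diffuse component only."""
    spectrum = np.asarray(spectrum, float)
    direct = np.asarray(direct, float)
    diffuse = np.asarray(diffuse, float)
    incident = diffuse if in_shadow else direct + diffuse
    reflectance = spectrum / incident
    return reflectance * (direct + diffuse)
```

Relighting all pixels to a common illumination in this way makes spectra comparable across shadow boundaries, which is what helps the downstream detection and classification tasks.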
Optical Engineering | 2016
Markus Henriksson; Håkan Larsson; Christina Grönwall; Gustav Tolt
Time-correlated single-photon-counting (TCSPC) lidar provides very high resolution range measurements. This makes the technology interesting for three-dimensional imaging of complex scenes with targets behind foliage or other obscurations. TCSPC is a statistical method that demands integration of multiple measurements toward the same area to resolve objects at different distances within the instantaneous field-of-view. Point-by-point scanning will demand significant overhead for the movement, increasing the measurement time. Here, the effect of continuously scanning the scene row-by-row is investigated and signal processing methods to transform this into low-noise point clouds are described. The methods are illustrated using measurements of a characterization target and an oak and hazel copse. Steps between different surfaces of less than 5 cm in range are resolved as two surfaces.
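Once photon events are assigned to a scene point, the core statistical step is turning a pile of noisy range samples into a few surface estimates. A minimal stand-in for the paper's signal processing: sort the samples, split them where consecutive ranges are further apart than a gap threshold, and keep the mean of each sufficiently populated group, which rejects isolated noise counts. The gap and count thresholds are illustrative assumptions.

```python
def surface_estimates(ranges, gap=0.03, min_count=3):
    """Collapse photon range samples for one scene point into surfaces:
    sort, split into groups at jumps larger than `gap` (metres), and
    return the mean range of each group with at least min_count samples."""
    r = sorted(ranges)
    groups, current = [], [r[0]]
    for a, b in zip(r, r[1:]):
        if b - a > gap:
            groups.append(current)
            current = []
        current.append(b)
    groups.append(current)
    return [sum(g) / len(g) for g in groups if len(g) >= min_count]
```

With a gap threshold below 5 cm, two surfaces separated by 5 cm yield two groups, matching the resolution claim in the abstract.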
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Gustav Tolt; Anders Wiklund; Pierre Andersson; Tomas Chevalier; Christina Grönwall; Frank Gustafsson; Håkan Larsson
In this paper, we present techniques related to registration and change detection using 3D laser radar data. First, an experimental evaluation of a number of registration techniques based on the Iterative Closest Point algorithm is presented. As an extension, an approach for removing noisy points prior to the registration process by keypoint detection is also proposed. Since the success of accurate registration is typically dependent on a satisfactorily accurate starting estimate, coarse registration is an important functionality. We address this problem by proposing an approach for coarse 2D registration, which is based on detecting vertical structures (e.g. trees) in the point sets and then finding the transformation that gives the best alignment. Furthermore, a change detection approach based on voxelization of the registered data sets is presented. The 3D space is partitioned into a cell grid and a number of features for each cell are computed. Cells for which features have changed significantly (statistical outliers) then correspond to significant changes.
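The voxel-based change detection step can be sketched as follows: partition the registered clouds into a common cell grid, compute a per-cell feature (here simply the point count, the simplest of the features the abstract mentions), and flag cells where the feature differs strongly between the two data sets. Thresholding on an absolute count difference stands in for the paper's statistical outlier test.

```python
import numpy as np

def voxel_changes(cloud_a, cloud_b, voxel=1.0, min_diff=5):
    """Flag change between two registered point clouds: voxelize both and
    report voxel indices whose point counts differ by at least min_diff."""
    def counts(cloud):
        keys = {}
        for p in np.asarray(cloud, float):
            k = tuple(int(v) for v in np.floor(p / voxel))
            keys[k] = keys.get(k, 0) + 1
        return keys
    ca, cb = counts(cloud_a), counts(cloud_b)
    return {k for k in set(ca) | set(cb)
            if abs(ca.get(k, 0) - cb.get(k, 0)) >= min_diff}
```

The approach only makes sense after accurate registration, which is why the paper treats coarse alignment and ICP refinement first.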
Intelligent Robots and Systems | 2015
Fredrik Bissmarck; Martin Svensson; Gustav Tolt
A Next Best View estimate may guide processes of 3D reconstruction and exploration to completeness within reasonable time. For the evaluation to be useful, the Next Best View computation itself must be effective in terms of time and accuracy. It needs to be model-free to hold for any geometry of the 3D scene. In this work, we compare the effectiveness of different approaches to Next Best View evaluation. A 3D occupancy grid map, allowing for fast lookup and ray casting, serves as a foundation for our evaluation. We tested naive, state-of-the-art and novel algorithms on data acquired from both indoor and outdoor environments. We demonstrate that the most effective volumetric algorithm is a novel one that exploits spatial hierarchy, utilizes frontiers, and avoids redundant ray casting.
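A volumetric Next Best View score of the kind compared above can be sketched as ray casting in an occupancy grid: from a candidate view, cast rays and count the unknown cells visible before the first occupied cell; the best view maximizes that count. This toy version uses a dense 2D grid with integer codes (0 free, 1 occupied, -1 unknown) instead of an octree-style hierarchical map, and the ray stepping is deliberately naive; the paper's frontier-guided, hierarchy-exploiting algorithm is not reproduced.

```python
import numpy as np

def nbv_gain(grid, view, directions, max_range=20):
    """Information gain of a candidate view on an occupancy grid
    (0 = free, 1 = occupied, -1 = unknown): number of distinct unknown
    cells hit by the rays before their first occupied cell."""
    seen = set()
    for d in directions:
        r, c = view
        for _ in range(max_range):
            r += d[0]
            c += d[1]
            ri, ci = int(round(r)), int(round(c))
            if not (0 <= ri < grid.shape[0] and 0 <= ci < grid.shape[1]):
                break
            if grid[ri, ci] == 1:   # ray blocked by an occupied cell
                break
            if grid[ri, ci] == -1:
                seen.add((ri, ci))
    return len(seen)
```

Evaluating this naively for every candidate view is exactly the redundant ray casting the paper's hierarchical, frontier-based algorithm is designed to avoid.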