Wolfgang Krüger
Fraunhofer Society
Publications
Featured research published by Wolfgang Krüger.
Advanced Video and Signal Based Surveillance | 2012
Michael Teutsch; Wolfgang Krüger
Automatic processing of videos from small UAVs offers high potential for advanced surveillance applications but is also very challenging. The challenges include camera motion, large object distances, varying object backgrounds, multiple objects close to each other, low signal-to-noise ratio (SNR), and compression artifacts. In this paper, a video processing chain for the detection, segmentation, and tracking of multiple moving objects is presented that addresses these challenges. Its foundation is the detection of local image features that are not stationary. By clustering these features and performing subsequent object segmentation, regions are generated that represent object hypotheses. Multi-object tracking is performed with a Kalman filter that takes camera motion into account. Split or merged object regions are handled by fusing the regions and the local features. Finally, a quantitative evaluation of object segmentation and tracking is provided.
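As an illustration of the tracking step, a minimal constant-velocity Kalman filter for one image coordinate, with a simple camera-motion correction, might look like this (a sketch under assumed noise parameters, not the paper's implementation):

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one tracked coordinate.
# State = [position, velocity]; all noise parameters are illustrative.
class KalmanTrack:
    def __init__(self, x0, q=1e-2, r=1.0):
        self.x = np.array([x0, 0.0])          # state: position, velocity
        self.P = np.eye(2) * 10.0             # state covariance
        self.F = np.array([[1.0, 1.0],        # constant-velocity model, dt = 1 frame
                           [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])       # only position is measured
        self.Q = np.eye(2) * q                # process noise
        self.R = np.array([[r]])              # measurement noise

    def predict(self, camera_shift=0.0):
        # Compensate estimated camera motion by shifting the predicted position.
        self.x = self.F @ self.x
        self.x[0] += camera_shift
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z):
        # Standard Kalman measurement update with detection position z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```

In a real multi-object tracker, one such filter would run per object hypothesis (per axis), with the camera shift taken from the estimated frame-to-frame transformation.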
2010 International WaterSide Security Conference | 2010
Michael Teutsch; Wolfgang Krüger
Autonomous round-the-clock observation of wide, critical maritime areas can be a powerful support for border protection agencies in preventing criminal acts such as illegal immigration, piracy, or drug trafficking. These criminal acts are often carried out using small boats to reduce the probability of detection. In this paper, we present an image exploitation approach to detect and classify maritime objects in infrared image sequences recorded from an autonomous platform. We focus on high robustness and generality with respect to variations in boat appearance, image quality, and environmental conditions. A fusion of three different detection algorithms is performed to create reliable alarm hypotheses. Subsequently, a set of well-investigated features is extracted from the alarm hypotheses and evaluated using a two-stage classification with support vector machines (SVMs) in order to distinguish between three object classes: clutter, irrelevant objects, and suspicious boats. On the given image data we achieve a rate of 97% correct classifications.
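The fusion of several detectors into alarm hypotheses can be sketched, for illustration, as overlap-based voting between their bounding boxes (a simplified stand-in; the paper does not specify this exact scheme, and the thresholds are assumptions):

```python
def iou(a, b):
    # a, b: (x1, y1, x2, y2) axis-aligned boxes; intersection over union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def fuse_detections(detector_outputs, min_votes=2, iou_thr=0.3):
    # Keep a box as an alarm hypothesis only if at least `min_votes`
    # different detectors produced an overlapping box.
    boxes = [(b, i) for i, out in enumerate(detector_outputs) for b in out]
    hypotheses, used = [], set()
    for k, (b, _) in enumerate(boxes):
        if k in used:
            continue
        group = [k] + [m for m in range(k + 1, len(boxes))
                       if m not in used and iou(b, boxes[m][0]) >= iou_thr]
        voters = {boxes[m][1] for m in group}
        if len(voters) >= min_votes:
            used.update(group)
            hypotheses.append(b)
    return hypotheses
```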
2010 International WaterSide Security Conference | 2010
Wolfgang Krüger; Zigmund Orlov
In this paper we present a robust solution to detect and track small and distant boats from thermal images captured by a camera mounted on an autonomous platform (buoy or patrol vessel). It is characterized by a multiple-layer and multiple-algorithm architecture, which uses a combination of algorithms relying on complementary image cues to generate detections that are robust with respect to variations of boat appearance, image quality, and environmental conditions. The core component of the image exploitation is a detection layer which provides the results of several detection algorithms in a motion-stabilized scene coordinate frame aligned with the estimated horizon line. In the autonomous system, detections are used to trigger alarms and to facilitate multi-target tracking.
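The idea of a motion-stabilized scene coordinate frame aligned with the horizon can be illustrated by rotating detection coordinates according to an estimated horizon line (a simplified sketch; in practice the angle and offset come from the horizon-estimation step, which is not shown):

```python
import math

def to_horizon_frame(points, horizon_angle_deg, horizon_y):
    # Rotate image points so the estimated horizon becomes the x-axis.
    # horizon_angle_deg: estimated roll of the horizon in the image;
    # horizon_y: vertical image position of the horizon at x = 0.
    a = math.radians(-horizon_angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    out = []
    for x, y in points:
        yy = y - horizon_y              # shift origin onto the horizon line
        out.append((x * ca - yy * sa, x * sa + yy * ca))
    return out
```

Detections expressed in such a frame can be compared and tracked across frames even while the buoy or vessel rolls and pitches.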
Computer Vision and Pattern Recognition | 2015
Michael Teutsch; Wolfgang Krüger
The detection of vehicles driving on busy urban streets in videos acquired by airborne cameras is challenging due to the large distance between camera and vehicles, simultaneous vehicle and camera motion, shadows, or low contrast due to weak illumination. However, it is an important processing step for applications such as automatic traffic monitoring, detection of abnormal behaviour, border protection, or surveillance of restricted areas. In contrast to commonly applied object segmentation methods based on background subtraction or frame differencing, we detect moving vehicles using the combination of a track-before-detect (TBD) approach and machine learning: an AdaBoost classifier learns the appearance of vehicles in low resolution and is applied within a sliding window algorithm to detect vehicles inside a region of interest determined by the TBD approach. Our main contribution lies in the identification, optimization, and evaluation of the most important parameters to achieve both high detection rates and real-time processing.
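The sliding-window detection inside a TBD-provided region of interest can be sketched as follows; `score_fn` stands in for the trained AdaBoost classifier, and the window size, stride, and threshold are illustrative assumptions:

```python
import numpy as np

def sliding_window_detect(image, roi, window=(8, 8), stride=4,
                          score_fn=None, thr=0.5):
    # Scan a classifier over the region of interest proposed by the
    # track-before-detect stage; every window scoring above `thr`
    # becomes a vehicle detection (x1, y1, x2, y2).
    x0, y0, x1, y1 = roi
    wh, ww = window
    hits = []
    for y in range(y0, y1 - wh + 1, stride):
        for x in range(x0, x1 - ww + 1, stride):
            patch = image[y:y + wh, x:x + ww]
            if score_fn(patch) >= thr:
                hits.append((x, y, x + ww, y + wh))
    return hits
```

Restricting the scan to the TBD region of interest, rather than the whole frame, is what keeps such a scheme compatible with real-time processing.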
Proceedings of SPIE | 2011
Michael Teutsch; Wolfgang Krüger; Norbert Heinze
Small and medium-sized UAVs such as the German LUNA have long endurance and, in combination with sophisticated image exploitation algorithms, constitute a very cost-efficient platform for surveillance. At Fraunhofer IOSB, we have developed the video exploitation system ABUL with the goal of meeting the demands of small and medium-sized UAVs. Several image exploitation algorithms such as multi-resolution processing, super-resolution, image stabilization, geocoded mosaicking, and stereo images/3D models have been implemented and are used with several UAV systems. Among these algorithms is moving target detection with compensation of sensor motion. Moving objects are of major interest during surveillance missions, but due to the movement of the sensor on the UAV and the small object size in the images, it is a challenging task to develop reliable detection algorithms under real-time constraints on limited hardware resources. Based on the compensation of sensor motion by fast and robust estimation of geometric transformations between images, independent motion is detected relative to the static background. From independent motion cues, regions of interest (bounding boxes) are generated and used as initial object hypotheses. A novel classification module is introduced to perform an appearance-based analysis of the hypotheses. Various texture features are extracted and evaluated automatically in order to achieve a good feature selection for successfully classifying vehicles and people.
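The motion-compensation idea can be sketched in simplified form; here a brute-force integer translation stands in for the fast, robust estimation of geometric transformations mentioned above (illustrative only, and the threshold is an assumption):

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    # Brute-force integer translation estimate: a toy stand-in for
    # robust frame-to-frame transform estimation.
    best, best_err = (0, 0), np.inf
    h, w = prev.shape
    m = max_shift
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            a = prev[m:h - m, m:w - m]
            b = curr[m + dy:h - m + dy, m + dx:w - m + dx]
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def independent_motion_mask(prev, curr, thr=0.2):
    # Compensate the estimated sensor motion, then flag residual
    # differences as independent (object) motion.
    dy, dx = estimate_shift(prev, curr)
    warped = np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
    return np.abs(warped - prev) > thr
```

Connected regions of the resulting mask would then yield the bounding-box hypotheses passed to the classification module.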
Proceedings of SPIE | 2014
Günter Saur; Wolfgang Krüger; Arne Schumann
Change detection is one of the most important tasks when using unmanned aerial vehicles (UAVs) for video reconnaissance and surveillance. We address changes on a short time scale, i.e., the observations are taken at time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames into a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input to the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin-plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to the larger scene coverage.
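The extended image differencing step can be illustrated with a minimal sketch (Python/NumPy; the weights `w_int` and `w_grad` and the mean-plus-k-sigma threshold are illustrative assumptions, not the authors' exact parameters):

```python
import numpy as np

def change_mask(img_a, img_b, w_int=1.0, w_grad=1.0, k=2.0):
    # Linear combination of intensity and gradient-magnitude difference
    # images, thresholded adaptively at mean + k * std of the combined map.
    def grad_mag(im):
        gy, gx = np.gradient(im.astype(float))
        return np.hypot(gx, gy)
    d_int = np.abs(img_a.astype(float) - img_b.astype(float))
    d_grad = np.abs(grad_mag(img_a) - grad_mag(img_b))
    d = w_int * d_int + w_grad * d_grad
    thr = d.mean() + k * d.std()     # adaptive, image-dependent threshold
    return d > thr
```

On registered mosaic pairs, the binary mask would then be post-processed to separate relevant changes (e.g., moved vehicles) from artifacts.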
Proceedings of SPIE | 2010
N. Heinze; M. Esswein; Wolfgang Krüger; G. Saur
UAVs have a growing importance for reconnaissance and surveillance. Thanks to improved technical capabilities, even small UAVs now have an endurance of about six hours, but they carry less sophisticated sensors due to strict weight limitations. This puts a high strain and workload on the small teams usually deployed with such systems. To lessen the strain on photo interpreters and to improve the capability of such systems, we have developed and integrated automatic image exploitation algorithms. An important aspect is the detection of moving objects to give the photo interpreter (PI) hints where such objects are. Mosaicking of imagery helps to gain a better overview of the scene. By computing stereo mosaics from monocular video data, 3D models can also be derived from tactical UAV data in a further processing step. A special means of gaining an overview is to use multi-temporal and multi-focal images from video sensors of different resolutions on the platform and to fuse them into one image. This results in good situational awareness of the scene with a lightweight sensor platform and a standard video link.
Image and Signal Processing for Remote Sensing XVIII | 2012
Günter Saur; Wolfgang Krüger
In recent years, there has been increased use of unmanned aerial vehicles (UAVs) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, which is based on time series of still images taken several days, weeks, or even years apart. Examples of relevant changes we are looking for are recently parked or moved vehicles. As a prerequisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures, we extended the approaches by a local neighborhood search. The algorithms were applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and multivariate alteration detection.
The algorithms have been adapted for use in semi-automatic workflows of the ABUL video exploitation system of Fraunhofer IOSB; see Heinze et al. (2010). In a further step we plan to incorporate more information from the video sequences into the change detection input images, e.g., by image enhancement or by along-track stereo, both of which are available in the ABUL system.
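The local neighborhood search used to tolerate small misalignments can be sketched as follows (illustrative Python/NumPy, not the ABUL implementation; the search radius is an assumption):

```python
import numpy as np

def neighborhood_difference(a, b, radius=1):
    # For each pixel, take the minimum absolute difference to b over a
    # (2*radius+1)^2 neighborhood; a misalignment of up to `radius`
    # pixels then produces no spurious change response.
    a = a.astype(float)
    b = b.astype(float)
    best = np.full(a.shape, np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            best = np.minimum(best, np.abs(a - shifted))
    return best
```

Plain differencing flags a one-pixel shift of an image structure as a change; the neighborhood minimum suppresses it while leaving genuine changes intact.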
Proceedings of SPIE | 2017
Jutta Hild; Wolfgang Krüger; Stefan Brüstle; Patrick Trantelle; Gabriel Unmüßig; Michael Voit; Norbert Heinze; Elisabeth Peinsipp-Byma; Jürgen Beyerer
Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety- and security-critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks such as motion detection or target tracking are welcome. While such automated algorithms relieve the human operators of basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation of interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface (again, we compare gaze-based and traditional mouse-based interaction), we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once with automatic support and once without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and an automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% while reducing workload at the same time.
Proceedings of SPIE | 2016
Jutta Hild; Wolfgang Krüger; Norbert Heinze; Elisabeth Peinsipp-Byma; Jürgen Beyerer
Motion video analysis is a challenging task, particularly if real-time analysis is required. An important issue is therefore how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive, and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface can help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design that aims to combine the qualities of the human observer's perception with those of the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., by investigating how best to relaunch tracking in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.