Publication


Featured research published by Lars Wilko Sommer.


Workshop on Applications of Computer Vision | 2016

A survey on moving object detection for wide area motion imagery

Lars Wilko Sommer; Michael Teutsch; Tobias Schuchert; Jürgen Beyerer

Wide Area Motion Imagery (WAMI) enables the surveillance of tens of square kilometers with one airborne sensor. Each image can contain thousands of moving objects. Applications such as driver behavior analysis or traffic monitoring require precise multiple object tracking, which depends on the initial detections. However, low object resolution, dense traffic, and imprecise image alignment lead to split, merged, and missing detections. Although many approaches have been presented in the literature, no systematic evaluation of moving object detection exists so far. This paper provides a detailed overview of existing methods for moving object detection in WAMI data. We also propose a novel combination of short-term background subtraction and suppression of image alignment errors by considering pixel neighborhoods. In total, eleven methods are systematically evaluated using more than 160,000 ground truth detections of the WPAFB 2009 dataset. The proposed method achieves the best performance with respect to precision and recall.
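The general idea of short-term background subtraction with neighborhood-based suppression of alignment errors can be sketched as follows. This is a minimal, generic illustration, not the paper's implementation; the threshold, neighborhood radius, and function names are assumptions.

```python
import numpy as np

def background_model(frames):
    """Short-term background estimate: per-pixel median over a few aligned frames."""
    return np.median(np.stack(frames, axis=0), axis=0)

def motion_mask(frame, background, thresh=25, radius=1):
    """Difference against the background; to suppress small image-alignment
    errors, keep a pixel only if the *minimum* difference within its local
    neighborhood still exceeds the threshold."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    h, w = diff.shape
    # Minimum filter over a (2*radius+1)^2 neighborhood, computed by shifting.
    padded = np.pad(diff, radius, mode="edge")
    neigh_min = diff.copy()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            neigh_min = np.minimum(neigh_min, shifted)
    return neigh_min > thresh
```

Isolated one-pixel differences, the typical signature of slight misregistration, are suppressed because their neighborhood minimum stays below the threshold, while compact moving vehicles survive.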


Workshop on Applications of Computer Vision | 2017

Fast Deep Vehicle Detection in Aerial Images

Lars Wilko Sommer; Tobias Schuchert; Jürgen Beyerer

Vehicle detection in aerial images is a crucial image processing step for many applications, such as screening of large areas. In recent years, several deep learning based frameworks have been proposed for object detection. However, these detectors were developed for datasets that differ considerably from aerial images. In this paper, we systematically investigate the potential of Fast R-CNN and Faster R-CNN, which achieve top results on common detection benchmark datasets, for aerial images. To this end, we examine the applicability of both detectors and of eight state-of-the-art object proposal methods used to generate candidate regions, and provide relevant adaptations of the proposal methods. To overcome the shortcomings of the original approach in handling small instances, we further propose our own network, which clearly outperforms state-of-the-art methods for vehicle detection in aerial images. All experiments are performed on two publicly available datasets to account for differing characteristics such as ground sampling distance, number of objects per image, and varying backgrounds.


2016 IEEE Winter Applications of Computer Vision Workshops (WACVW) | 2016

Low resolution vehicle re-identification based on appearance features for wide area motion imagery

Mickael Cormier; Lars Wilko Sommer; Michael Teutsch

Describing vehicle appearance in Wide Area Motion Imagery (WAMI) data is challenging due to low resolution and the absence of color. However, appearance information can effectively support multiple object tracking or queries in a real-time vehicle database. In this paper, we present a systematic evaluation of existing appearance descriptors that are applicable to low resolution vehicle re-identification in WAMI data. The problem is formulated as a closed-set, one-to-many re-identification problem, where a query vehicle has to be found in a list of candidates ranked by matching similarity. For our evaluation we use a subset of the WPAFB 2009 dataset. The most promising results are achieved by a combined descriptor of Local Binary Patterns (LBP) and Local Variance Measure (VAR) applied to local grid cells of the image. Our results can be used to improve appearance based multiple object tracking and real-time vehicle database search algorithms.
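The LBP-based ranking can be illustrated with a minimal numpy sketch: compute an 8-neighbour LBP histogram per patch and rank candidates by histogram intersection. This is a generic textbook variant for illustration only; the paper's descriptor additionally uses VAR and local grid cells.

```python
import numpy as np

def lbp_histogram(patch):
    """Basic 8-neighbour Local Binary Pattern histogram of a grayscale patch."""
    p = patch.astype(np.int32)
    c = p[1:-1, 1:-1]                      # interior pixels (centers)
    code = np.zeros_like(c)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        code += (neigh >= c).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)       # normalized histogram

def rank_candidates(query, candidates):
    """Rank candidate patches by histogram-intersection similarity to the query."""
    q = lbp_histogram(query)
    sims = [np.minimum(q, lbp_histogram(c)).sum() for c in candidates]
    return sorted(range(len(candidates)), key=lambda i: -sims[i])
```

In a closed-set re-identification setting, the returned order is exactly the ranked candidate list against which recognition rates are measured.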


Automatic Target Recognition XXVIII | 2018

Systematic evaluation of deep learning based detection frameworks for aerial imagery

Lars Wilko Sommer; Arne Schumann; Lucas Steinmann; Jürgen Beyerer

Object detection in aerial imagery is crucial for many applications in the civil and military domain. In recent years, deep learning based object detection frameworks have significantly outperformed conventional approaches based on hand-crafted features on several datasets. However, these detection frameworks are generally designed and optimized for common benchmark datasets, which differ considerably from aerial imagery, especially in object sizes. As already demonstrated for Faster R-CNN, several adaptations are necessary to account for these differences. In this work, we adapt several state-of-the-art detection frameworks, including Faster R-CNN, R-FCN, and Single Shot MultiBox Detector (SSD), to aerial imagery and discuss in detail the adaptations that mainly improve the detection accuracy of all frameworks. As the output of deeper convolutional layers comprises more semantic information, these layers are generally used in detection frameworks as feature maps to locate and classify objects. However, the resolution of these feature maps is insufficient for handling small object instances, which results in inaccurate localization or incorrect classification of small objects. Furthermore, state-of-the-art detection frameworks perform bounding box regression to predict the exact object location, using so-called anchor or default boxes as references. We demonstrate how an appropriate choice of anchor box sizes can considerably improve detection performance. Finally, we evaluate the impact of the performed adaptations on two publicly available datasets to account for varying ground sampling distances and differing backgrounds. The presented adaptations can serve as a guideline for further datasets or detection frameworks.
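The role of anchor boxes can be made concrete with a small sketch that tiles reference boxes over a feature-map grid; the stride, scale, and ratio values below are illustrative assumptions, not the paper's settings. The point is that scales suited to small aerial objects must be chosen explicitly.

```python
import numpy as np

def generate_anchors(fm_h, fm_w, stride, scales, ratios):
    """Tile reference (anchor) boxes over a feature map.

    Each feature-map cell gets len(scales) * len(ratios) boxes centred on
    the corresponding input-image location; returns an (N, 4) array of
    (x1, y1, x2, y2) boxes in input-image coordinates."""
    boxes = []
    for y in range(fm_h):
        for x in range(fm_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w = s * np.sqrt(r)   # width/height ratio r at area ~ s^2
                    h = s / np.sqrt(r)
                    boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)
```

With default Faster R-CNN-style scales of hundreds of pixels, hardly any anchor overlaps a vehicle of a few dozen pixels; shrinking the scales to match typical object sizes is the kind of adaptation the abstract refers to.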


Advanced Video and Signal Based Surveillance | 2017

Flying object detection for automatic UAV recognition

Lars Wilko Sommer; Arne Schumann; Thomas Müller; Tobias Schuchert; Jürgen Beyerer

With the increasing use of unmanned aerial vehicles (UAVs) by consumers, automatic UAV detection systems have become increasingly important for security services. In such a system, video imagery is a core modality for the detection task, because it can cover large areas and is very cost-effective to acquire. Many detection systems consist of two parts: flying object detection and subsequent object classification. In this work, we investigate the suitability of a number of flying object detection approaches for the task of UAV detection based on video data from static and moving cameras. We compare approaches based on image differencing with object proposal detectors which are learned from data. Finally, we classify each detection by a convolutional neural network (CNN) into the classes UAV or clutter. Our approach is evaluated on six sequences of challenging real world data which contain multiple UAVs, birds, and background motion.


Advanced Video and Signal Based Surveillance | 2017

Deep cross-domain flying object classification for robust UAV detection

Arne Schumann; Lars Wilko Sommer; Johannes Klatte; Tobias Schuchert; Jürgen Beyerer

Recent progress in the development of unmanned aerial vehicles (UAVs) raises serious safety issues for mass events and safety-sensitive locations like prisons or airports. To address these concerns, robust UAV detection systems are required. In this work, we propose a UAV detection framework based on video images. Depending on whether the video images are recorded by static or moving cameras, we initially detect regions that are likely to contain an object by median background subtraction or a deep learning based object proposal method, respectively. The detected regions are then classified as UAVs or distractors, such as birds, by applying a convolutional neural network (CNN) classifier. To train this classifier, we use our own dataset comprised of crawled and self-acquired drone images, as well as bird images from a publicly available dataset. We show that, even across a significant domain gap, the resulting classifier can successfully identify UAVs in our target dataset. We evaluate our UAV detection framework on six challenging video sequences that contain UAVs at different distances as well as birds and background motion.


Electro-Optical Remote Sensing X, Edinburgh, UK, September 26, 2016. Ed.: G. Kamerman | 2016

Generating object proposals for improved object detection in aerial images

Lars Wilko Sommer; Tobias Schuchert; Jürgen Beyerer

Screening of aerial images covering large areas is important for many applications such as surveillance, tracing, or rescue tasks. To reduce the workload of image analysts, an automatic detection of candidate objects is required. In general, object detection is performed by applying classifiers or a cascade of classifiers within a sliding window algorithm. However, the huge number of windows to classify, especially in the case of multiple object scales, makes these approaches computationally expensive. To overcome this challenge, we reduce the number of candidate windows by generating so-called object proposals: a set of candidate regions in an image that are likely to contain an object. We apply the Selective Search approach, which has been broadly used as a proposal method for detectors like R-CNN or Fast R-CNN. Here, a set of small regions is generated by an initial segmentation, followed by hierarchical grouping of the initial regions to generate proposals at different scales. To reduce the computational costs of the original approach, which combines 80 segmentation settings and grouping strategies, we apply only the most appropriate combination. To this end, we analyze the impact of varying segmentation settings, different merging strategies, and various colour spaces by calculating the recall with regard to the number of object proposals and the intersection over union between generated proposals and ground truth annotations. As aerial images differ considerably from the datasets typically used for exploring object proposal methods, in particular in object size and the image fraction occupied by an object, we further adapt the Selective Search algorithm to aerial images by replacing the random order of generated proposals with a weighted order based on proposal size and by integrating a termination criterion for the merging strategies.
Finally, the adapted approach is compared to the original Selective Search algorithm and to baseline approaches like sliding window on the publicly available DLR 3K Munich Vehicle Aerial Image Dataset to show that the number of candidate windows to classify can be clearly reduced.
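The evaluation metric mentioned above, recall as a function of proposal count and intersection over union, is straightforward to state in code. A minimal sketch (generic, with illustrative function names):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def recall_at(proposals, ground_truth, iou_thresh=0.5):
    """Fraction of ground-truth boxes covered by at least one proposal."""
    hits = sum(any(iou(gt, p) >= iou_thresh for p in proposals)
               for gt in ground_truth)
    return hits / max(len(ground_truth), 1)
```

Sweeping the number of proposals kept (e.g. the top 100, 500, 1000) and plotting `recall_at` against it yields the recall-vs-proposal-count curves used to compare proposal configurations.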


2016 9th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) | 2016

A comprehensive study on object proposals methods for vehicle detection in aerial images

Lars Wilko Sommer; Tobias Schuchert; Jürgen Beyerer

Detecting vehicles in aerial images is an important task in many applications such as traffic monitoring or screening of large areas. In general, vehicle detection in aerial images is performed by applying classifiers or a cascade of classifiers within a sliding window algorithm. However, detecting vehicles in a real-time system is limited by the huge number of windows to classify, especially in the case of varying object scales, aspect ratios, or object orientations. To reduce this number, we propose to apply so-called object proposal methods. In recent years, several such methods have been proposed for generating candidate windows in detection frameworks. However, aerial images differ considerably from the datasets typically used for exploring these methods. To examine their applicability to aerial images, we evaluate 11 state-of-the-art object proposal methods on the publicly available DLR 3K Munich Vehicle Aerial Image Dataset. First, we manually modified the provided ground truth data to enable comparison with the generated object proposals. To compensate for the differing characteristics of the aerial images, we adapted seven methods by examining different parameter settings and extensions for each method separately. Finally, we demonstrate the potential of such methods for a detection framework for aerial images, as significantly fewer candidate windows are generated in comparison to sliding window approaches.
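Why the sliding-window count explodes is simple arithmetic, sketched below with illustrative numbers (not the dataset's actual resolution or the papers' settings):

```python
def sliding_window_count(img_w, img_h, win_sizes, stride):
    """Number of windows a dense sliding-window scan classifies for one image,
    summed over all window sizes (scale / aspect-ratio combinations)."""
    total = 0
    for (ww, wh) in win_sizes:
        nx = max(0, (img_w - ww) // stride + 1)   # horizontal positions
        ny = max(0, (img_h - wh) // stride + 1)   # vertical positions
        total += nx * ny
    return total
```

Every additional scale or aspect ratio multiplies the grid again, whereas a proposal method emits a roughly fixed budget of a few hundred to a few thousand candidates per image regardless of how many object shapes it must cover.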


Workshop on Applications of Computer Vision | 2018

Multi Feature Deconvolutional Faster R-CNN for Precise Vehicle Detection in Aerial Imagery

Lars Wilko Sommer; Arne Schumann; Tobias Schuchert; Jürgen Beyerer

Accurate detection of objects in aerial images is an important task for many applications such as traffic monitoring, surveillance, reconnaissance, and rescue tasks. Recently, deep learning based detection frameworks have clearly improved detection performance on aerial images compared to conventional methods comprised of hand-crafted features and a classifier within a sliding window approach. These deep learning based detection frameworks use the output of the last convolutional layer as the feature map for localization and classification. Due to the small size of objects in aerial images, only shallow layers of standard models like VGG-16, or small networks, are applicable in order to provide a sufficiently high feature map resolution. However, high-resolution feature maps offer less semantic and contextual information, which makes such approaches more prone to false alarms caused by objects with similar shapes, especially for tiny objects. In this paper, we extend the Faster R-CNN detection framework to cope with this issue. To this end, we apply a deconvolutional module that up-samples the low-dimensional feature maps of deep layers and combines the up-sampled features with the features of shallow layers, while the feature map resolution is kept sufficiently high to localize tiny objects. Our proposed deconvolutional framework clearly outperforms state-of-the-art methods on two publicly available datasets.
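The core fusion step, up-sampling a deep feature map and stacking it with a shallow one, can be sketched with numpy. This is a structural illustration only: the actual module uses a learned transposed convolution, not the nearest-neighbour up-sampling assumed here.

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x up-sampling of a (C, H, W) feature map; a learned
    deconvolution (transposed convolution) plays this role in the framework."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def fuse(shallow, deep):
    """Up-sample the deep, low-resolution map to the shallow map's resolution
    and stack both along the channel axis, so the fused map keeps the shallow
    map's spatial detail while gaining the deep map's semantics."""
    up = upsample2x(deep)
    assert up.shape[1:] == shallow.shape[1:]
    return np.concatenate([shallow, up], axis=0)
```

The detector then localizes and classifies on the fused map, which has the spatial resolution needed for tiny vehicles and the semantic depth needed to reject similar-shaped clutter.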


arXiv: Computer Vision and Pattern Recognition | 2018

A systematic evaluation of recent deep learning architectures for fine-grained vehicle classification

Krassimir Valev; Arne Schumann; Lars Wilko Sommer; Jürgen Beyerer

Fine-grained vehicle classification is the task of classifying the make, model, and year of a vehicle. This is a very challenging task, because vehicles of different types but similar color and viewpoint can often look much more similar than vehicles of the same type but differing color and viewpoint. Vehicle make, model, and year, in combination with vehicle color, are of importance in several applications such as vehicle search, re-identification, tracking, and traffic analysis. In this work, we investigate the suitability of several recent landmark convolutional neural network (CNN) architectures, which have shown top results on large scale image classification tasks, for the task of fine-grained classification of vehicles. We compare the performance of the networks VGG16, several ResNets, Inception architectures, the recent DenseNets, and MobileNet. For classification we use the Stanford Cars-196 dataset, which features 196 different types of vehicles. We investigate several aspects of CNN training, such as data augmentation and training from scratch vs. fine-tuning. Importantly, we introduce no aspects in the architectures or training process which are specific to vehicle classification. Our final model achieves a state-of-the-art classification accuracy of 94.6%, outperforming all related works, even approaches which are specifically tailored for the task, e.g. by including vehicle part detections.
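The data augmentation investigated for CNN training typically amounts to simple stochastic image transforms. A generic sketch of two common ones (the specific transforms and parameters here are assumptions, not the paper's exact pipeline):

```python
import numpy as np

def random_flip_crop(image, crop_size, rng):
    """Random horizontal flip followed by a random square crop -- two common
    augmentations applied per training sample per epoch."""
    h, w = image.shape[:2]
    if rng.random() < 0.5:
        image = image[:, ::-1]                     # horizontal flip
    y = int(rng.integers(0, h - crop_size + 1))    # random crop origin
    x = int(rng.integers(0, w - crop_size + 1))
    return image[y:y + crop_size, x:x + crop_size]
```

Because each epoch sees a different flip/crop of every car image, the network cannot latch onto absolute positions of parts, which matters for fine-grained distinctions between near-identical models.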

