Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Peter Wellig is active.

Publication


Featured research published by Peter Wellig.


Unmanned/Unattended Sensors and Sensor Networks XI; and Advanced Free-Space Optical Communication Techniques and Applications | 2015

Detection and tracking of drones using advanced acoustic cameras

Joël Busset; Florian Perrodin; Peter Wellig; Beat Ott; Kurt Heutschi; Torben Rühl; Thomas Nussbaumer

Recent events of drones flying over city centers, official buildings and nuclear installations stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasures. Indeed, detecting and tracking them can be difficult with traditional techniques. A system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras is presented. The described sensor is completely passive and is composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc.). It is a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may only represent a few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, which suffer from the weakness of the echo signal coming back to the sensor. The detection distance depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested in both laboratory environments and outdoor conditions. It was determined that drones can be tracked up to 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech signal of a person 80 to 100 meters away can be captured with acceptable speech intelligibility.
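The core of such an acoustic camera is a beamforming step that phase-aligns the microphone signals towards candidate directions and measures the resulting power. The sketch below illustrates a basic frequency-domain delay-and-sum approach under assumed array geometry and sampling parameters; it is not the authors' real-time implementation.

    import numpy as np

    def steered_power(mic_signals, mic_positions, directions, fs, c=343.0):
        """Estimate acoustic power per candidate direction (delay-and-sum sketch).
        mic_signals: (n_mics, n_samples), mic_positions: (n_mics, 3) in metres,
        directions: (n_dirs, 3) unit vectors pointing towards candidate sources."""
        n_mics, n_samples = mic_signals.shape
        spectra = np.fft.rfft(mic_signals, axis=1)            # per-microphone spectra
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        power = np.zeros(len(directions))
        for i, d in enumerate(directions):
            delays = mic_positions @ d / c                    # relative arrival delays in seconds
            # Phase-align every channel towards direction d and sum coherently.
            aligned = spectra * np.exp(2j * np.pi * np.outer(delays, freqs))
            power[i] = np.sum(np.abs(aligned.sum(axis=0)) ** 2)
        return power                                          # maximum marks the likely source direction

Evaluating this power map over a grid of directions for each frame gives the real-time sound-power image that the tracking stage can then follow.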


Proceedings of SPIE | 2014

A new omni-directional multi-camera system for high resolution surveillance

Ömer Cogal; Abdulkadir Akin; Kerem Seyid; Vladan Popovic; Alexandre Schmid; Beat Ott; Peter Wellig; Yusuf Leblebici

Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on parabolic mirror or fisheye lens where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to a single image sensor’s image resolution. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired from the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in surveillance domain such as large perimeter object tracking, very-high resolution depth map estimation and high dynamicrange imaging which are beyond standard stitching and panorama generation methods.
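As a quick sanity check of the quoted figures, the pixel counts and the resulting raw data rates can be worked out as below; the pixel counts come from the abstract, while the 8-bit RGB assumption for the data rate is ours.

    # Quick check of the figures quoted above.
    recording_px = 17700 * 4650                    # ~82.3 MP omni-directional frame
    realtime_px = 9000 * 2400                      # ~21.6 MP real-time frame
    print(recording_px / 1e6, realtime_px / 1e6)   # 82.305 and 21.6 megapixels

    bytes_per_pixel = 3                            # assumed 8-bit RGB
    print(recording_px * bytes_per_pixel * 9.5 / 1e9,   # ~2.3 GB/s raw at 9.5 fps
          realtime_px * bytes_per_pixel * 30 / 1e9)     # ~1.9 GB/s raw at 30 fps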


Target and Background Signatures | 2015

Human factors of target detection tasks within heavily cluttered video scenes

Samuel Huber; Peter Wellig

Background: Algorithms show difficulties in distinguishing weak signals of a target from a cluttered background, a task that humans tend to master relatively easily. We conducted two studies to identify how various degrees of clutter influence operator performance and search patterns in a visual target detection task. Methods: First, 8 male subjects had to look for specific female targets within a heavily cluttered public area. Subjects were supported by differing amounts of markings that helped them to identify females in general. We presented video clips and analyzed the search patterns. Second, 18 subject matter experts had to identify targets on a heavily frequented motorway intersection. We presented them with video material from a UAV (Unmanned Aerial Vehicle) surveillance mission. The video image was subdivided into three zones: the central zone (CZ), a circular area of 10°; the peripheral zone (PZ), corresponding to a 4:3 format; and the hyper-peripheral zone (HPZ), which represented the lateral region specific to the 16:9 format. We analyzed fixation densities and task performance. Results: We found an approximately U-shaped correlation between the number of markings in a video and the degree of structure in search patterns as well as performance. For the motorway surveillance task we found a difference in mean detection time for CZ vs. HPZ (p=0.01) and PZ vs. HPZ (p=0.003) but no difference for CZ vs. PZ (p=0.491). There were no differences in detection rate for the respective zones. We found the highest fixation density in CZ, decreasing towards HPZ. Conclusion: We were able to demonstrate that markings can increase surveillance operator performance in a cluttered environment as long as their number is kept in an optimal range. When performing a search task within a heavily cluttered environment, humans tend to show rather erratic search patterns and spend more time watching central picture areas.
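For illustration, assigning a fixation sample to one of the three zones could look roughly as follows; the frame size, the pixels-per-degree factor and the reading of the 10° circle as a radius are assumptions for this sketch, not values from the study.

    import math

    W, H = 1920, 1080            # assumed 16:9 video frame
    PX_PER_DEG = 40.0            # assumed pixels per degree of visual angle

    def zone(x, y):
        cx, cy = W / 2, H / 2
        # Central zone: inside the 10-degree circle around the image centre
        # (treated here as a 10-degree radius; the paper's definition may differ).
        if math.hypot(x - cx, y - cy) <= 10.0 * PX_PER_DEG:
            return "CZ"
        # Peripheral zone: inside the central 4:3 crop of the 16:9 frame.
        if abs(x - cx) <= (H * 4 / 3) / 2:
            return "PZ"
        # Hyper-peripheral zone: lateral strips present only in the 16:9 format.
        return "HPZ"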


Proceedings of the SPIE Conference on Defense & Security Europe 2015 | 2015

Real-time object detection and tracking in omni-directional surveillance using GPU

Florian Vincent Depraz; Vladan Popovic; Beat Ott; Peter Wellig; Yusuf Leblebici

Recent technological advancements in hardware systems have made higher-quality cameras possible. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time. Graphics Processing Units (GPUs) are powerful devices with substantial processing capability for parallel workloads. The detection of objects in a scene requires a large number of independent pixel operations on the video frames that can be done in parallel, making GPUs a good choice for the processing platform. This paper concentrates only on background subtraction techniques [2] to detect the objects present in the scene. The foreground pixels are extracted from the processed frame and compared to the corresponding ones of the background model. Using a connected-component detector, neighboring pixels are gathered in order to form blobs which correspond to the detected foreground objects. The new blobs are compared to the blobs formed in the previous frame to determine whether the corresponding object has moved.
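A minimal single-threaded sketch of this pipeline using OpenCV is given below (background subtraction, connected-component blob extraction, nearest-neighbour blob association between frames). The paper's implementation runs on a GPU and on omni-directional video; the input file name, thresholds and area cut-off here are assumptions.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("surveillance.mp4")          # hypothetical input video
    bg = cv2.createBackgroundSubtractorMOG2()
    prev_blobs = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)                           # foreground mask from the background model
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        # Gather neighbouring foreground pixels into blobs, dropping very small ones.
        blobs = [tuple(c) for c, s in zip(centroids[1:], stats[1:])
                 if s[cv2.CC_STAT_AREA] > 50]
        # Associate each blob with the nearest blob of the previous frame to estimate its motion.
        for c in blobs:
            if prev_blobs:
                dists = [np.hypot(c[0] - p[0], c[1] - p[1]) for p in prev_blobs]
                prev = prev_blobs[int(np.argmin(dists))]
                dx, dy = c[0] - prev[0], c[1] - prev[1]   # displacement since the last frame
        prev_blobs = blobs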


Target and Background Signatures II | 2016

Numerical RCS and micro-Doppler investigations of a consumer UAV

Arne Schroder; Uwe Aulenbacher; Matthias Renker; Urs Boniger; Roland Oechslin; Axel Murk; Peter Wellig

This contribution gives an overview of recent investigations regarding the detection of consumer-market unmanned aerial vehicles (UAVs). The steadily increasing number of such drones gives rise to the threat of UAVs interfering with civil air traffic. Technologies for monitoring UAVs flying in restricted air space, i.e. close to or even over airports, are urgently needed. One promising way of tracking drones is to employ radar systems. For the detection and classification of UAVs, knowledge about their radar cross section (RCS) and micro-Doppler signature is of particular importance. We have carried out numerical and experimental studies of the RCS and the micro-Doppler of an exemplary commercial drone in order to study its detectability with radar systems.
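The micro-Doppler spread of a rotor scales with its blade-tip speed. A back-of-the-envelope estimate is sketched below; the rotor radius, rotation rate and radar frequency are example assumptions, not parameters from the paper.

    import numpy as np

    c = 3e8
    f_radar = 10e9                                # assumed X-band radar
    wavelength = c / f_radar
    rotor_radius = 0.12                           # m (assumed propeller radius)
    rpm = 8000                                    # assumed rotation rate
    v_tip = 2 * np.pi * rotor_radius * rpm / 60.0          # blade-tip speed, ~100 m/s
    f_micro = 2 * v_tip / wavelength                        # maximum monostatic micro-Doppler shift
    print(round(v_tip, 1), "m/s tip speed ->", round(f_micro / 1e3, 1), "kHz micro-Doppler spread")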


Target and Background Signatures II | 2016

Detection of mini-UAVs in the presence of strong topographic relief: a multisensor perspective

Urs Boniger; Beat Ott; Peter Wellig; Uwe Aulenbacher; Jens Klare; Thomas Nussbaumer; Yusuf Leblebici

Based on the steadily growing use of mini-UAVs for numerous civilian and military applications, mini-UAVs have been recognized as an increasing potential threat. Therefore, counter-UAV solutions addressing the peculiarities of this class of UAVs have recently received a significant amount of attention. Reliable detection, localization, identification and tracking represent a fundamental prerequisite for such counter-UAV systems. In this paper, we focus on the assessment of different sensor technologies and their ability to detect mini-UAVs in a representative rural Swiss environment. We conducted a field trial in August 2015, using different, primarily short-range, experimental sensor systems from armasuisse and selected research partners. After an introduction to the challenges of UAV detection in regions with strong topographic relief, we introduce the experimental setup and describe the key results from this joint experiment.
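One of the peculiarities of such terrain is that the line of sight between a ground-based sensor and a low-flying mini-UAV is frequently masked. The toy check below, with synthetic elevations, illustrates the effect; it is not part of the trial's processing.

    import numpy as np

    def has_line_of_sight(terrain, sensor_h, target_h):
        """terrain: 1-D array of ground elevations between sensor (index 0) and target (last index)."""
        n = len(terrain)
        start = terrain[0] + sensor_h
        end = terrain[-1] + target_h
        for i in range(1, n - 1):
            # Height of the straight sensor-target ray above the i-th profile point.
            ray = start + (end - start) * i / (n - 1)
            if terrain[i] >= ray:
                return False          # terrain blocks the view
        return True

    profile = np.array([500, 520, 610, 580, 540, 530])    # synthetic elevations in metres
    print(has_line_of_sight(profile, sensor_h=5, target_h=50))   # False: the ridge masks the UAV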


International Geoscience and Remote Sensing Symposium | 2012

EOSAR, a SAR-scene simulator based upon real target and background signatures

Sergei Bokaderov; Anika Maresch; Hartmut Schimpf; Helmut Essen; Peter Wellig

Algorithms for automatic target recognition and image intelligence have to be trained on the largest possible database. A way to avoid excessive and costly measurement campaigns is to insert targets that were measured in a tower/turntable configuration into pre-existing synthetic aperture radar (SAR) scenes. This blending has to be performed within the SAR processing chain such that the result is identical to a measurement with the target present in the scene during the SAR overflight.
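Because SAR focusing is linear in the complex data, blending amounts, in the simplest view, to coherently adding the re-sampled target response to the background data before focusing. The fragment below is a minimal sketch under that assumption, with synthetic stand-in arrays; the actual EOSAR chain handles acquisition geometry, sampling and clutter interaction in far more detail.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-ins: complex background raw data and a target response already
    # re-sampled to the scene geometry (in practice these come from the SAR system
    # and from tower/turntable measurements, respectively).
    scene_raw = rng.standard_normal((512, 512)) + 1j * rng.standard_normal((512, 512))
    target_raw = np.zeros_like(scene_raw)
    target_raw[250:260, 250:260] = 5.0 + 0j               # toy point-like target response
    blended = scene_raw + target_raw                       # coherent (complex) superposition
    # The blended data can then be focused by the same SAR processor as a real acquisition.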


Proceedings of SPIE, the International Society for Optical Engineering | 2010

Polarimetric imaging with the 91GHz radiometer SPIRA

Axel Murk; Oliver Stähli; Christian Mätzler; Marco Canavero; Roland Oechslin; Peter Wellig; Denis Notel; H. Essen

The Scanning Polarimetric Imaging Radiometer (SPIRA) is a passive microwave imaging system operating around 91 GHz. It consists of two orthogonally polarized receiver channels and an analog adding-correlator network with 2 GHz bandwidth, which can measure all four Stokes parameters simultaneously by scanning the scene with an offset parabolic reflector on an elevation-over-azimuth scanner. In October 2008 the SPIRA instrument participated in the joint Swiss-German Radiometer Experiment Thun, where it was operated in parallel with two PMMW systems of the Fraunhofer Institut für Hochfrequenzphysik und Radartechnik and an IR camera. During this measurement campaign, different camouflage kits, vehicles and persons with hidden threats were observed together with reference objects. This paper gives an overview of the three different instruments and discusses selected images of the joint measurement campaign.
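For reference, one common convention for forming the four Stokes parameters from the complex fields of two orthogonally polarized channels is sketched below; sign conventions vary, and this is an illustrative sketch rather than the SPIRA correlator implementation.

    import numpy as np

    def stokes(E_h, E_v):
        """E_h, E_v: complex field samples of the two orthogonally polarized channels."""
        I = np.mean(np.abs(E_h) ** 2 + np.abs(E_v) ** 2)   # total intensity
        Q = np.mean(np.abs(E_h) ** 2 - np.abs(E_v) ** 2)   # H/V linear imbalance
        U = 2 * np.mean(np.real(E_h * np.conj(E_v)))       # +/-45 degree linear component
        V = 2 * np.mean(np.imag(E_h * np.conj(E_v)))       # circular component
        return I, Q, U, V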


Target and Background Signatures IV | 2018

Target detection with deep learning in polarimetric imaging

Selman Ergunay; Peter Wellig; Yusuf Leblebici; Suha Kose; Beat Ott

Polarimetric imaging techniques demonstrate enhanced capabilities in advanced object detection tasks thanks to their ability to discriminate man-made objects from natural background surfaces. While spectral signatures carry information only about material properties, the polarization state of an optical field contains information related to surface features of objects, such as shape and roughness. With these additional benefits, polarimetric imaging reveals physical properties useful for advanced object detection tasks which cannot be acquired using conventional imaging. In this work, the primary objective is to utilize state-of-the-art deep learning models designed for object detection tasks using images obtained by polarimetric systems. In order to train deep learning models, it is necessary to have a sufficiently large dataset consisting of polarimetric images with various classes of objects in them. We started by constructing such a dataset with an adequate number of visual and infrared (SWIR) polarimetric images obtained using polarimetric imaging systems and masking relevant parts for object detection models. We managed to achieve a high performance score while detecting vehicles with metallic surfaces using polarimetric imaging. Even with a limited number of training samples, polarimetric imaging demonstrated superior performance compared to models trained using conventional imaging techniques. We observed that using models trained with both polarimetric and conventional imaging techniques in parallel gives the best performance score, since these models were able to compensate for each other's weaknesses. In the subsequent stages, we plan to expand the study to the application of spiking neural network (SNN) architectures for implementing the detection/classification tasks.
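A typical way to feed polarimetric information to a detection network is to derive Stokes, degree-of-linear-polarization (DoLP) and angle-of-linear-polarization (AoLP) images from several polarizer orientations and stack them as input channels. The exact channel set used in the paper is not stated; the sketch below is one plausible pre-processing step.

    import numpy as np

    def polarimetric_channels(I0, I45, I90, I135):
        """Intensity images taken through polarizers at 0, 45, 90 and 135 degrees."""
        S0 = I0 + I90                                      # total intensity
        S1 = I0 - I90
        S2 = I45 - I135
        dolp = np.sqrt(S1 ** 2 + S2 ** 2) / np.clip(S0, 1e-6, None)   # degree of linear polarization
        aolp = 0.5 * np.arctan2(S2, S1)                                # angle of linear polarization
        return np.stack([S0, dolp, aolp], axis=-1)          # (H, W, 3) input for a detection network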


IEEE Radar Conference | 2017

Micro-UAV detection using DAB-based passive radar

Christof Schupbach; Christian Patry; F.D.V. Maasdorp; Urs Boniger; Peter Wellig

Consumer-level micro unmanned aerial vehicles (UAVs) are becoming an ever-increasing threat to personal privacy and public safety. Due to their size, velocity and low flight altitudes, they are difficult to detect, and new techniques have to be investigated. The present work demonstrates the detection of a fixed-wing micro UAV using passive radar based on digital audio broadcasting (DAB) signals up to a distance of 1.2 km. This is the first such demonstration in the VHF band.
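The core of a DAB-based passive radar detector is a cross-ambiguity (range-Doppler) map between a reference channel receiving the direct broadcast signal and a surveillance channel. A compact sketch is given below; the Doppler grid, delay span and FFT-based circular correlation are illustrative simplifications rather than the authors' processing chain.

    import numpy as np

    def range_doppler_map(ref, surv, fs, max_delay, doppler_bins):
        """Cross-ambiguity surface: correlate the surveillance channel against
        Doppler-shifted copies of the reference channel."""
        n = len(ref)
        t = np.arange(n) / fs
        rd = np.zeros((len(doppler_bins), max_delay), dtype=complex)
        for i, fd in enumerate(doppler_bins):
            shifted = ref * np.exp(2j * np.pi * fd * t)     # apply candidate Doppler shift
            # FFT-based circular cross-correlation over delay (adequate for a sketch).
            R = np.fft.ifft(np.fft.fft(surv) * np.conj(np.fft.fft(shifted)))
            rd[i] = R[:max_delay]
        return np.abs(rd)                                    # peaks indicate bistatic range / Doppler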

Collaboration


Dive into Peter Wellig's collaboration.

Top Co-Authors

Yusuf Leblebici

École Polytechnique Fédérale de Lausanne

Vladan Popovic

École Polytechnique Fédérale de Lausanne
