Publication


Featured research published by Benjamin Langmann.


Time-of-Flight and Depth Imaging | 2013

Real-Time Image Stabilization for ToF Cameras on Mobile Platforms

Benjamin Langmann; Klaus Hartmann; Otmar Loffeld

In recent years, depth cameras have gained increasing acceptance in the areas of robotics and autonomous systems. However, on mobile platforms, depth measurements with continuous-wave amplitude-modulation Time-of-Flight (ToF) cameras suffer from motion artifacts, since multiple acquisitions are required to compute one depth map (resulting in longer effective exposure times). Some lenses of different manufacturers include image stabilizers, but these can only compensate for small image shifts. Moreover, when performing phase unwrapping based on the acquisition of multiple depth maps with different modulation frequencies, the motion artifacts are significantly more severe. In this paper, a method to compensate for camera motion during the acquisition of a single depth map as well as of multiple depth maps is presented. Image shifts are estimated first, and after normalization the individual phase images are shifted accordingly. The proposed approach is evaluated on different scenes and is shown to facilitate ToF imaging on mobile platforms.
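The demodulation at the heart of this problem can be sketched briefly. A continuous-wave ToF camera derives each depth map from four sequentially captured phase images, which is why inter-frame motion corrupts the result. The snippet below is an illustrative reconstruction, not the paper's implementation: it demodulates four phase images into depth and optionally realigns each phase image by a pre-estimated integer pixel shift first, mirroring the compensation idea; the shift estimates themselves are assumed to come from a separate registration step.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def depth_from_phase_images(a0, a1, a2, a3, f_mod, shifts=None):
    """Demodulate four CW-ToF phase images into a depth map.

    shifts: optional per-image (dy, dx) integer translations (e.g.
    estimated from a color stream); each phase image is realigned
    before demodulation so that all four samples describe the same
    scene point despite camera motion.
    """
    imgs = [a0, a1, a2, a3]
    if shifts is not None:
        imgs = [np.roll(np.roll(im, dy, axis=0), dx, axis=1)
                for im, (dy, dx) in zip(imgs, shifts)]
    a0, a1, a2, a3 = imgs
    phase = np.arctan2(a3 - a1, a0 - a2)      # wrapped phase
    phase = np.mod(phase, 2.0 * np.pi)        # map into [0, 2*pi)
    return C * phase / (4.0 * np.pi * f_mod)  # ambiguous beyond c/(2*f_mod)
```

The integer `np.roll` is a crude stand-in for the sub-pixel shifting a real system would apply after normalizing the phase images.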


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Development and investigation of a long-range time-of-flight and color imaging system

Benjamin Langmann; Wolfgang Weihs; Klaus Hartmann; Otmar Loffeld

Time-of-Flight (ToF) imaging based on the Photonic Mixer Device (PMD) or similar ToF imaging solutions has been limited to short distances in the past, due to limited lighting devices and the low sensitivity of ToF imaging chips. Long-range distance measurements are typically the domain of laser scanning systems. In this paper, PMD-based medium- and long-range lighting devices working together with a 2D/3D camera are presented, and several measurement results are discussed. The proposed imaging systems suffer from two systematic limitations in addition to problems due to wind and insufficient lighting: a low lateral resolution of the depth imaging chip and ambiguities in the distance measurements. In order to provide a robust and flexible system, we introduce algorithms to obtain unambiguous depth values (phase unwrapping) and to perform joint motion compensation and super-resolution. Several experiments were conducted in order to evaluate the components of the multimodal imaging system.
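The distance ambiguity mentioned above stems from the wrapped phase: a single modulation frequency f can only measure distances modulo c/(2f). One common way to resolve it, shown here as a brute-force sketch under assumed noise-free inputs (the paper's actual unwrapping algorithm may differ), is to test wrap-count hypotheses for two frequencies and keep the pair whose distance hypotheses agree best.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def unwrap_two_frequencies(phi1, phi2, f1, f2, d_max):
    """Resolve the ToF distance ambiguity from two wrapped phase maps.

    For each pixel, try all wrap counts up to d_max for both
    frequencies and keep the pair of distance hypotheses that agree
    best; return their average.
    """
    r1, r2 = C / (2.0 * f1), C / (2.0 * f2)   # unambiguous ranges
    d1 = phi1 / (2.0 * np.pi) * r1            # wrapped distances
    d2 = phi2 / (2.0 * np.pi) * r2
    best = np.full(d1.shape, np.inf)
    out = np.zeros_like(d1)
    for n1 in range(int(np.ceil(d_max / r1))):
        for n2 in range(int(np.ceil(d_max / r2))):
            h1, h2 = d1 + n1 * r1, d2 + n2 * r2
            err = np.abs(h1 - h2)
            better = err < best
            best = np.where(better, err, best)
            out = np.where(better, (h1 + h2) / 2.0, out)
    return out
```

With 20 MHz and 15 MHz modulation, for example, the individual unambiguous ranges of about 7.5 m and 10 m combine to roughly 30 m.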


Journal of Sensor and Actuator Networks | 2015

Critical Infrastructure Surveillance Using Secure Wireless Sensor Networks

Michael Niedermeier; Xiaobing He; Hermann de Meer; Carsten Buschmann; Klaus Hartmann; Benjamin Langmann; Michael Koch; Stefan Fischer; Dennis Pfisterer

In this work, a secure wireless sensor network (WSN) for the surveillance, monitoring and protection of critical infrastructures was developed. To guarantee the security of the system, the main focus was the implementation of a unique security concept, which includes both security on the communication level and mechanisms that ensure functional safety during operation. While there are many theoretical approaches in various subdomains of WSNs (network structures, communication protocols and security concepts), the construction, implementation and real-life application of these devices are still rare. This work deals with these aspects, covering all phases from concept generation to operation of a secure wireless sensor network. While the key focus of this paper lies on the security and safety features of the WSN, the detection, localization and classification capabilities resulting from the interaction of the nodes' different sensor types are also described.


Computers in Industry | 2013

Increasing the accuracy of Time-of-Flight cameras for machine vision applications

Benjamin Langmann; Klaus Hartmann; Otmar Loffeld

Range imaging based on the Time-of-Flight (ToF) principle has evolved considerably in recent years. In particular, the lateral resolution, the ability to operate outdoors in sunlight, and the sensitivity have been improved. Nevertheless, the acceptance of depth cameras for machine vision in industrial environments is still rather limited. The major shortcoming of ToF depth cameras compared to laser range scanners is their measuring accuracy, which is not sufficient for several applications. In this paper, we first briefly introduce several state-of-the-art depth cameras and demonstrate their capabilities. Afterwards, we explore possibilities to increase the radial resolution and the accuracy of ToF depth cameras based on the Photonic Mixer Device (PMD). In general, the use of higher modulation frequencies promises higher depth resolution but, on the other hand, yields higher noise levels. Moreover, the accuracy is limited by systematic errors, and the measurements are affected by random noise; we show how to minimize and compensate for both in industrial environments.
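The modulation-frequency trade-off noted here follows directly from the CW-ToF measurement model: the measured distance is d = c·φ/(4πf), so for a fixed phase noise σ_φ the depth noise σ_d = c·σ_φ/(4πf) shrinks with f, while the unambiguous range c/(2f) shrinks at the same rate. A minimal sketch (illustrative helper, not from the paper):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_tradeoff(f_mod, sigma_phase):
    """Return (unambiguous range, depth noise) for a CW-ToF camera,
    assuming a fixed phase-measurement noise sigma_phase in radians."""
    unambiguous_range = C / (2.0 * f_mod)
    depth_sigma = C * sigma_phase / (4.0 * math.pi * f_mod)
    return unambiguous_range, depth_sigma
```

Doubling the modulation frequency halves both numbers, which is why multi-frequency phase unwrapping is needed to regain range at high frequencies.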


International Conference on Computer Vision | 2012

A modular framework for 2D/3D and multi-modal segmentation with joint super-resolution

Benjamin Langmann; Klaus Hartmann; Otmar Loffeld

A versatile multi-image segmentation framework for 2D/3D or multi-modal segmentation is introduced in this paper, with possible applications in a wide range of machine vision problems. The framework performs a joint segmentation and super-resolution to account for images of unequal resolutions gained from different imaging sensors. This allows combining the high-resolution details of one modality with the distinctiveness of another. A set of measures is introduced to weight measurements according to their expected reliability; it is utilized in the segmentation as well as in the super-resolution. The approach is demonstrated with different experimental setups, and the effects of additional modalities as well as of the framework's parameters are shown.
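The reliability-weighting idea can be illustrated with a toy feature construction: upsample the low-resolution depth map to the color grid and scale each modality by a reliability weight before feeding the stacked features to a clustering-based segmenter. The weights and the nearest-neighbour upsampling below are illustrative choices, not the paper's measures.

```python
import numpy as np

def multimodal_features(color, depth, w_color=1.0, w_depth=4.0):
    """Build per-pixel feature vectors for joint 2D/3D segmentation.

    color: (h, w, 3) array; depth: smaller (dy, dx) array. The depth
    map is upsampled to the color grid by nearest-neighbour lookup,
    and each modality is scaled by an assumed reliability weight.
    """
    h, w, _ = color.shape
    dy, dx = depth.shape
    # nearest-neighbour upsampling of the low-resolution depth map
    depth_up = depth[(np.arange(h) * dy // h)[:, None],
                     (np.arange(w) * dx // w)[None, :]]
    return np.dstack([w_color * color, w_depth * depth_up[..., None]])
```

Any standard clustering (e.g. k-means on the 4-D features) would then segment jointly in color and depth, with `w_depth` controlling how strongly the coarser modality influences the result.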


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2011

Scanning 2D/3D monocular camera

Oliver Lottner; Benjamin Langmann; Wolfgang Weihs; Klaus Hartmann

We present the principal aspects and the concept of a monocular combination of a scanning 3D Time-of-Flight sensor with a large-scale conventional 2D image sensor. While the 2D sensor profits from the whole field of view of an F-mount photo-film-format lens, the smaller 3D sensor is mounted on a highly precise XY linear stage. Thus, by means of macro-scanning, the 3D sensor can be moved to interesting parts of the scene so as to provide a second modality, e.g. for classification purposes; or, in the same setup, micro-scanning can be applied to enhance the lateral 3D resolution. A test setup was realized to verify the performance.


Archive | 2014

Wide Area 2D/3D Imaging

Benjamin Langmann

Imaging technology is widely utilized in a growing number of disciplines ranging from gaming, robotics and automation to medicine. In the last decade, 3D imaging also found increasing acceptance and application, largely driven by the development of novel 3D cameras and measuring devices. These cameras are usually limited to indoor scenes with relatively short distances. In this thesis, the development and evaluation of medium- and long-range 3D cameras are described in order to overcome these limitations. The MultiCam, a monocular 2D/3D camera that incorporates a color and a depth imaging chip, forms the basis for this research. The camera operates on the Time-of-Flight (ToF) principle by emitting modulated infrared light and measuring the round-trip time.

In order to apply this kind of camera to larger scenes, novel lighting devices are required; they are presented in the course of this work. On the software side, methods for scene observation working with 2D and 3D data are introduced and adapted to large scenes. An extended method for foreground segmentation illustrates the advantages of additional 3D data, but its limitations due to the lower resolution of the depth maps are also addressed.

Long-range depth measurements with large focal lengths and 3D imaging on mobile platforms are easily impaired by involuntary camera motions. Therefore, an approach for motion compensation with joint super-resolution is introduced to facilitate ToF imaging in these areas. The camera motion is estimated based on the high-resolution color images of the MultiCam and can be interpolated for each phase image, i.e. each raw image of the 3D imaging chip. This method was applied successfully under different circumstances. A framework for multi-modal segmentation and joint super-resolution also addresses the lower resolution of the 3D imaging chip: it resolves the resolution mismatch by estimating high-resolution depth maps while performing the segmentation.

Subsequently, a global multi-modal and multi-view tracking approach is described, which is able to take advantage of any type and number of cameras. Objects are modeled with ellipsoids, and their appearance is modeled with color histograms as well as density estimates. The thesis concludes with remarks on future developments and the application of depth cameras in new environments.


Archive | 2014

Multi-Modal Background Subtraction

Benjamin Langmann

Background subtraction is a common first step in video processing; its purpose is to reduce the effective image size in subsequent processing steps by segmenting the mostly static background from the moving or changing foreground. In this chapter, the background modeling approaches described previously are extended to handle 2D/3D videos.
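A minimal sketch of what such a 2D/3D extension can look like (illustrative only; the chapter builds on more elaborate background models): a per-pixel running-average model in both color and depth, where a pixel is foreground if either modality deviates, and invalid depth readings are excluded so the less reliable modality cannot dominate the decision. All thresholds below are assumed, illustrative values.

```python
import numpy as np

class RGBDBackground:
    """Running-average background model for combined 2D/3D video."""

    def __init__(self, alpha=0.05, tc=30.0, td=0.15):
        self.alpha, self.tc, self.td = alpha, tc, td
        self.bg_color = None
        self.bg_depth = None

    def apply(self, color, depth):
        """Return a boolean foreground mask for one (color, depth) frame."""
        if self.bg_color is None:            # first frame initializes the model
            self.bg_color = color.astype(float)
            self.bg_depth = depth.astype(float)
            return np.zeros(depth.shape, bool)
        dc = np.abs(color - self.bg_color).sum(axis=-1)   # color deviation
        dd = np.abs(depth - self.bg_depth)                # depth deviation
        valid = depth > 0                                  # 0 marks invalid depth
        fg = (dc > self.tc) | (valid & (dd > self.td))
        # update the model only where the scene is judged static
        a = self.alpha * (~fg)
        self.bg_color += a[..., None] * (color - self.bg_color)
        self.bg_depth += np.where(valid, a * (depth - self.bg_depth), 0.0)
        return fg
```

The depth term catches foreground whose color happens to match the background, which is the main benefit the chapter attributes to the added 3D modality.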


NEW2AN | 2013

MOVEDETECT – Secure Detection, Localization and Classification in Wireless Sensor Networks

Benjamin Langmann; Michael Niedermeier; Hermann de Meer; Carsten Buschmann; Michael Koch; Dennis Pfisterer; Stefan Fischer; Klaus Hartmann

In this paper, a secure wireless sensor network (WSN) developed within the MOVEDETECT project is presented. The goal of the project was to design, implement and demonstrate a secure WSN for the protection of critical infrastructure. In order to provide a reliable service, the system must detect any kind of tampering with the sensor nodes, prevent eavesdropping and manipulation of the communication, and detect, track and classify intruders in the protected region. Therefore, based on previous experience, a real-world WSN was developed which addresses practical issues like waterproofing, energy consumption, sensor deployment and visualization of the WSN state, but also provides a unique security concept, an interesting combination of sensors, and sophisticated sensor data processing and analysis. The system was evaluated by first examining the sensors and the sensor processing algorithms and then conducting realistic field tests.


International Symposium on Visual Computing | 2012

Depth Auto-calibration for Range Cameras Based on 3D Geometry Reconstruction

Benjamin Langmann; Klaus Hartmann; Otmar Loffeld

An approach for the auto-calibration and validation of depth measurements gained from range cameras is introduced. First, the geometry of the scene is reconstructed and its surface normals are computed. These normal vectors are segmented in 3D with the Mean-Shift algorithm, and large planes such as walls or the ground plane are recovered. The 3D reconstruction of the scene geometry is then utilized in a novel approach to derive principal camera parameters for range or depth cameras. It operates on a single range image alone and requires neither special equipment, such as markers or a checkerboard, nor the specific measurement procedures necessary for previous methods. The fact that wrong camera parameters deform the geometry of the objects in the scene is utilized to infer the constant depth error (the phase offset for continuous-wave ToF cameras) as well as the focal length. The proposed method is applied to ToF cameras based on the Photonic Mixer Device, which measure the depth of objects in the scene. Its capabilities as well as its current and systematic limitations are addressed and demonstrated.
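The first stage of this pipeline, back-projecting the range image and computing per-pixel surface normals, can be sketched as follows. This is a generic finite-difference formulation under an assumed pinhole model (fx, fy, cx, cy are hypothetical intrinsics), not the paper's exact procedure, and the Mean-Shift segmentation and plane recovery are omitted.

```python
import numpy as np

def normals_from_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map to 3D points and estimate per-pixel
    surface normals from finite differences of neighbouring points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx           # pinhole back-projection
    y = (v - cy) * depth / fy
    p = np.dstack([x, y, depth])        # (h, w, 3) point cloud
    du = np.gradient(p, axis=1)         # derivative along image columns
    dv = np.gradient(p, axis=0)         # derivative along image rows
    n = np.cross(du, dv)                # normal = cross of tangents
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12
    return n
```

For a fronto-parallel plane at constant depth, every normal points along the optical axis; deviations from such planarity on known-flat surfaces are exactly what the paper exploits to infer the phase offset and focal length.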

Collaboration



Top Co-Authors


Seyed Eghbal Ghobadi

Folkwang University of the Arts
