Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Josef Pauli is active.

Publication


Featured research published by Josef Pauli.


IEEE Intelligent Vehicles Symposium | 2012

A novel multi-lane detection and tracking system

Kun Zhao; Mirko Meuter; Christian Nunn; Dennis Müller; Stefan Müller-Schneiders; Josef Pauli

In this paper, a novel spline-based multi-lane detection and tracking system is proposed. Reliable lane detection and tracking is an important component of lane departure warning, lane keeping support, and lane change assistance systems. The major novelty of the proposed approach is the use of the Catmull-Rom spline in combination with extended Kalman filter tracking. The spline-based model enables accurate and flexible modeling of the lane markings, while the extended Kalman filter contributes significantly to the system's robustness and stability. The method makes no assumption about the parallelism or shapes of the lane markings, and the number of lane markings is not restricted; instead, each lane marking is modeled and tracked separately. The system runs in real time (30 fps) on a standard PC at WVGA image resolution (752 × 480). The test vehicle has been driven on roads with challenging scenarios, such as worn-out lane markings, construction sites, narrow corners, and highway exits and entries, and good performance has been demonstrated. A quantitative evaluation has been performed using manually annotated video sequences.
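As a rough illustration of the lane model described above, the sketch below evaluates a uniform Catmull-Rom spline through four hypothetical lane-marking control points; it is an assumption-laden toy example, not the authors' implementation, and the extended Kalman filter tracking step is omitted.

```python
# Toy sketch (not the authors' code): a uniform Catmull-Rom spline through
# hypothetical lane-marking control points, as one way to model a single lane
# marking before tracking it with an extended Kalman filter.
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Point on the spline segment between p1 and p2 for t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3)

# Hypothetical control points: x = lateral offset (m), y = distance ahead (m).
pts = np.array([[0.1, 5.0], [0.2, 15.0], [0.5, 30.0], [1.1, 50.0]])
curve = np.array([catmull_rom(*pts, t) for t in np.linspace(0.0, 1.0, 20)])
print(curve[:3])  # sampled lane-marking points between pts[1] and pts[2]
```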


International Conference on Intelligent Transportation Systems | 2009

Time to contact estimation using interest points

Dennis Müller; Josef Pauli; Christian Nunn; Steffen Görmer; Stefan Müller-Schneiders

This paper presents a novel approach to obtaining reliable and robust time-to-contact estimates from a monocular moving camera observing various obstacles. The algorithm uses interest points to measure the relative scale change of an obstacle and applies robust estimation techniques to combine the measurements into one of three motion models: constant distance, constant velocity, and constant acceleration. An interacting multiple model framework selects the appropriate model and finally estimates the time-to-contact with the observed obstacle. The algorithm is evaluated on a large set of recorded video sequences with radar ground truth. Because of its field of application, the entire algorithm is designed to use as little computation time as possible and is thus real-time capable.
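Under a constant-velocity motion model, the relative scale change of an obstacle between two frames determines the time-to-contact directly. The sketch below illustrates only this relation; the values are made up, and the interest-point matching, robust estimation, and interacting-multiple-model selection described in the abstract are not reproduced.

```python
# Toy sketch (assumption, not the paper's implementation): time-to-contact from
# the relative scale change of an obstacle under a constant-velocity model.
def ttc_from_scale(scale_ratio: float, dt: float) -> float:
    """TTC in seconds; scale_ratio = obstacle image size now / size dt ago."""
    if scale_ratio <= 1.0:
        return float("inf")  # obstacle is not approaching
    return dt / (scale_ratio - 1.0)

# Example: the obstacle grew by 2% between frames 1/30 s apart -> about 1.67 s.
print(round(ttc_from_scale(1.02, 1.0 / 30.0), 2))
```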


Machine Learning | 1998

Learning to Recognize and Grasp Objects

Josef Pauli

We apply techniques of computer vision and neural network learning to obtain a versatile robot manipulator. All work follows the principle of autonomous learning from visual demonstration: the user demonstrates the relevant objects, situations, and/or actions, and the robot vision system learns from them. Approaching and grasping technical objects involves three principal tasks: calibrating the camera-robot coordination, detecting the desired object in the images, and choosing a stable grasping pose. These procedures are based on (nonlinear) functions that are not known a priori and therefore have to be learned. We uniformly approximate the necessary functions by networks of Gaussian basis functions (GBF networks). By modifying the number of basis functions and/or the size of the Gaussian support, the quality of the function approximation changes. The appropriate configuration is learned in the training phase and applied during the operation phase. All experiments are carried out in real-world applications using an industrial articulated robot manipulator and the computer vision system KHOROS.
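The sketch below illustrates the core idea of approximating an unknown mapping with a network of Gaussian basis functions, where the number of centers and the Gaussian width control the approximation quality. The target function, centers, and width are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch (not the original KHOROS-based system): approximating an
# unknown 1-D mapping with a network of Gaussian basis functions (GBF network).
# Centers, width, and the target function are assumptions made for the example.
import numpy as np

def gbf_design_matrix(x, centers, sigma):
    """One Gaussian basis function per center, evaluated at every sample in x."""
    d = x[:, None] - centers[None, :]
    return np.exp(-0.5 * (d / sigma) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + 0.05 * rng.standard_normal(x.size)  # stand-in target data

centers = np.linspace(-1.0, 1.0, 12)  # number of basis functions and ...
sigma = 0.15                          # ... Gaussian support control the quality
Phi = gbf_design_matrix(x, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights

print(float(np.mean((Phi @ w - y) ** 2)))     # training error of the fit
```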


Hellenic Conference on Artificial Intelligence | 2010

Unsupervised recognition of ADLs

Todor Dimitrov; Josef Pauli; Edwin Naroska

In this paper we present an approach to the unsupervised recognition of activities of daily living (ADLs) in the context of smart environments. The developed system utilizes background domain knowledge about the user activities and the environment, in combination with probabilistic reasoning methods, in order to build the best possible explanation of the observed stream of sensor events. The main advantage over traditional methods, e.g. dynamic Bayesian models, lies in the ability to deploy the solution in different environments without undergoing a training phase. To demonstrate this, tests with recorded data sets from two ambient intelligence labs have been conducted. The results show that even with basic semantic modeling of how the user behaves and how his/her behavior is reflected in the environment, it is possible to draw conclusions about the certainty and the frequency with which certain activities are performed.
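The toy sketch below illustrates the general idea of explaining a sensor-event stream with hand-specified background knowledge instead of trained models: a belief over activities is updated per observed event. The activities, sensors, and probabilities are invented for illustration and do not come from the paper.

```python
# Toy sketch (activities, sensors, and probabilities are invented): updating a
# belief over ADLs from a stream of sensor events using only hand-specified
# background knowledge P(event | activity), with no training phase.
activities = ["cooking", "sleeping", "watching_tv"]
prior = {a: 1.0 / len(activities) for a in activities}

likelihood = {  # hypothetical domain knowledge
    "kitchen_motion": {"cooking": 0.80, "sleeping": 0.05, "watching_tv": 0.10},
    "stove_on":       {"cooking": 0.90, "sleeping": 0.01, "watching_tv": 0.02},
    "tv_power":       {"cooking": 0.10, "sleeping": 0.05, "watching_tv": 0.90},
}

def update(belief, event):
    """One Bayesian update step for a single observed sensor event."""
    post = {a: belief[a] * likelihood[event][a] for a in belief}
    z = sum(post.values())
    return {a: p / z for a, p in post.items()}

belief = prior
for event in ["kitchen_motion", "stove_on"]:
    belief = update(belief, event)
print(max(belief, key=belief.get), belief)  # most plausible activity so far
```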


Robotics and Autonomous Systems | 2001

Vision-based integrated system for object inspection and handling

Josef Pauli; Arne Schmidt; Gerald Sommer

Image-based effector servoing is a process of perception-action cycles for handling a robot effector under continual visual feedback. This paper applies visual servoing mechanisms not only for handling objects, but also for camera calibration and object inspection. A 6-DOF manipulator and a stereo camera head are mounted on separate platforms and are steered independently. In the first phase (calibration phase), camera features such as the optical axes and the fields of sharp view are determined. In the second phase (inspection phase), the robot hand carries an object into the field of view of one camera, moves the object along the optical axis toward the camera, rotates it to reach an optimal view, and finally inspects the object shape in detail. In the third phase (assembly phase), the system localizes a board containing holes of different shapes, determines the hole that best fits the object shape, then approaches and arranges the object appropriately. The final object insertion is based on haptic sensors and is not treated in this paper. At present, the robot system can handle cylindrical and cuboid pegs. For handling other object categories, the system can be extended with more sophisticated strategies in the inspection and/or assembly phase.
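The sketch below shows the core of a perception-action cycle in image-based servoing: a proportional correction that drives an observed image feature toward a target feature. The gain and feature values are assumptions; the actual system's calibration, stereo setup, and robot interface are not modeled.

```python
# Toy sketch of one perception-action cycle in image-based servoing: a
# proportional correction drives the observed image feature toward the target.
# Gain and feature coordinates are assumptions, not the paper's setup.
import numpy as np

def servo_step(feature, target, gain=0.5):
    """Commanded correction in feature space for one cycle."""
    return -gain * (feature - target)

feature = np.array([120.0, 80.0])   # observed image position of the effector (px)
target = np.array([100.0, 100.0])   # desired image position (e.g. on the optical axis)
for _ in range(20):                 # repeated perception-action cycles
    feature = feature + servo_step(feature, target)
print(np.round(feature, 2))         # converges toward the target feature
```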


Pattern Recognition Letters | 2002

Perceptual organization with image formation compatibilities

Josef Pauli; Gerald Sommer

This work presents a methodology contributing to boundary extraction in images of approximately polyhedral objects. We make extensive use of basic principles underlying the process of image formation and thus reduce the role of object-specific knowledge. Simple configurations of line segments are extracted subject to geometric-photometric compatibilities. The perceptual organization into polygonal arrangements is based on geometric regularity compatibilities under projective transformation. The combination of several types of compatibilities yields a saliency function for extracting a list of the most salient structures. Based on systematic measurements during an experimentation phase, the adequacy and degrees of the compatibilities are determined. The methodology is demonstrated for objects of various shapes located in cluttered scenes.
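As a loose illustration of combining several compatibility measures into a single saliency value for ranking candidate groupings, the sketch below uses a weighted product of scores; the particular compatibility types and weights are invented and not taken from the paper.

```python
# Toy sketch (compatibility types and weights are invented): combining several
# compatibility scores into one saliency value and ranking candidate groupings.
def saliency(scores, weights):
    """Weighted product of per-compatibility scores in [0, 1]."""
    s = 1.0
    for name, w in weights.items():
        s *= scores[name] ** w
    return s

weights = {"collinearity": 1.0, "contrast": 0.5, "regularity": 1.0}
candidates = {
    "grouping_A": {"collinearity": 0.9, "contrast": 0.7, "regularity": 0.8},
    "grouping_B": {"collinearity": 0.6, "contrast": 0.9, "regularity": 0.4},
}
ranked = sorted(candidates, key=lambda c: saliency(candidates[c], weights),
                reverse=True)
print(ranked)  # most salient polygonal arrangement first
```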


Workshop on Middleware for Pervasive and Ad Hoc Computing | 2007

A probabilistic reasoning framework for smart homes

Todor Dimitrov; Josef Pauli; Edwin Naroska

Inference and reasoning in modern AmI (Ambient Intelligence) middleware systems remain a complex task. Currently, no common patterns for building smart applications can be identified. This paper presents an ongoing effort to build a generic probabilistic reasoning framework for networked homes. The framework can be utilized for designing smart agents in a systematic and unified way. The developed modeling and reasoning algorithms make extensive use of information about the user and the way he/she interacts with the system. To achieve this, several levels of knowledge representation are combined. Each level enriches the domain knowledge so that a consistent, user-adaptable probabilistic knowledge base is constructed. The facts in the knowledge base can be used to encode the logic for a specific application scenario.


German Conference on Pattern Recognition | 2014

Graph-Based and Variational Minimization of Statistical Cost Functionals for 3D Segmentation of Aortic Dissections

Cosmin Adrian Morariu; Tobias Terheiden; Daniel Sebastian Dohle; Konstantinos Tsagakis; Josef Pauli

The objective of this contribution is to segment dissected aortas in computed tomography angiography (CTA) data in order to obtain morphological specifics of each patient's vessel. Custom-designed stent-grafts represent the only possibility to enable minimally invasive endovascular techniques for Type A dissections, which emerge within the ascending aorta (AA). The localization of cross-sectional aortic boundaries within planes orthogonal to a rough aortic centerline relies on a multicriterial 3D graph-based method. In order to account for the often non-circular shape of the dissected aortic cross-sections, the initial circular contour detected in the localization step undergoes a deformation process in 2D, steered by either local or global statistical distribution metrics. The automatic segmentation provided by our novel approach, which applies broadly to the delineation of tubular structures of variable shape and heterogeneous intensity, is compared with ground truth provided by a vascular surgeon for 11 CTA datasets.
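A much simplified analogue of graph-based cross-sectional boundary localization is a minimum-cost path over radii in a polar-resampled cross-section, as sketched below; the cost image is random stand-in data, and the multicriterial 3D graph, centerline extraction, and statistical contour deformation of the paper are not reproduced.

```python
# Much simplified sketch (random stand-in data, not the paper's method): locate
# a closed vessel boundary in a polar-resampled cross-section as the
# minimum-cost path over radii, one column per angular position.
import numpy as np

def polar_boundary(cost):
    """cost[r, a] is low where a boundary is likely; returns one radius per angle."""
    n_r, n_a = cost.shape
    acc = cost.copy()
    for a in range(1, n_a):                          # accumulate costs left to right
        for r in range(n_r):
            lo, hi = max(0, r - 1), min(n_r, r + 2)  # smoothness: radius step <= 1
            acc[r, a] += acc[lo:hi, a - 1].min()
    radii = np.zeros(n_a, dtype=int)
    radii[-1] = int(acc[:, -1].argmin())
    for a in range(n_a - 2, -1, -1):                 # backtrack the cheapest path
        r = radii[a + 1]
        lo, hi = max(0, r - 1), min(n_r, r + 2)
        radii[a] = lo + int(acc[lo:hi, a].argmin())
    return radii

cost = np.random.default_rng(1).random((40, 90))  # stand-in edge-cost image
print(polar_boundary(cost)[:10])                  # boundary radii for first angles
```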


Lecture Notes in Computer Science | 2001

Servoing Mechanisms for Peg-In-Hole Assembly Operations

Josef Pauli; Arne Schmidt; Gerald Sommer

Image-based effector servoing is a process of perception-action cycles for handling a robot effector under continual visual feedback. Apart from the primary goal of manipulating objects, we apply servoing mechanisms also for determining camera features, e.g. the optical axes of the cameras, and for actively changing the view, e.g. for inspecting the object shape. A peg-in-hole application is treated by a 6-DOF manipulator and a stereo camera head. The two robot components are mounted on separate platforms and can be steered independently. In the first phase (inspection phase), the robot hand carries an object into the field of view of one camera, moves the object along the optical axis toward the camera, rotates it to reach an optimal view, and finally inspects the object shape in detail. In the second phase (insertion phase), the system localizes a board containing holes of different shapes, determines the relevant hole based on the extracted object shape, then approaches the object, and finally inserts it into the hole. At present, the robot system can handle cylindrical and cuboid pegs. For treating more complicated objects, the system must be extended with more sophisticated strategies for the inspection and/or insertion phase.


Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 1992

Automatization in the Design of Image Understanding Systems

Bernd Radig; Wolfgang Eckstein; Karlhorst Klotz; Tilo Messer; Josef Pauli

To understand the meaning of an image or image sequence, to reduce the effort in the design process, and to increase the reliability and reusability of image understanding systems, a wide spectrum of AI techniques is applied. Solving an image understanding problem corresponds to specifying an image understanding system that implements the solution to the given problem. We describe an image understanding toolbox which supports the design of such systems. The toolbox includes help and tutor modules, an interactive user interface, interfaces to common procedural and AI languages, and an automatic configuration module.

Collaboration


Dive into Josef Pauli's collaborations.

Top Co-Authors

Johannes Herwig
University of Duisburg-Essen

Fabian Bürger
University of Duisburg-Essen

Jens Hoefinghoff
University of Duisburg-Essen

Tobias Terheiden
University of Duisburg-Essen

Zhenyu Tang
University of Duisburg-Essen