
Publication


Featured research published by Peter Weckesser.


Intelligent Robots and Systems | 1995

Multiple sensor processing for high-precision navigation and environmental modeling with a mobile robot

Peter Weckesser; Rüdiger Dillmann; M. Elbs; S. Hampel

In this paper, an approach to real-time position correction and environmental modeling based on odometry, ultrasonic sensing, structured light sensing and active stereo vision (bi- and trinocular) is presented. Odometry provides the robot with a position estimate, and with the help of a model of the environment, sensor perceptions can be matched to predictions. Ultrasonic sensing supports collision avoidance and obstacle detection and thus enables navigation in simply structured environments. Model-based image processing allows natural landmarks in the stereo images to be detected and classified uniquely. With only one observation, the robot's position and orientation relative to the observed landmark are found precisely. This sensing strategy is used when high precision is necessary for the performance of the navigation task. Finally, techniques are described that allow automatic mapping of an unknown or only partially known environment.
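The interplay of odometry-based prediction and single-landmark correction described above can be sketched in a few lines. The differential-drive odometry model and the closed-form landmark correction below are illustrative assumptions, not the estimator used in the paper.

```python
import math

def odometry_predict(pose, d_left, d_right, wheel_base):
    """Dead-reckoning pose update from incremental wheel travel.

    A generic differential-drive model (an assumption; the paper's
    odometry model is not reproduced here).
    """
    x, y, theta = pose
    d = (d_left + d_right) / 2.0              # forward travel
    dtheta = (d_right - d_left) / wheel_base  # heading change
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

def correct_from_landmark(pose, landmark_map, observation):
    """Correct the pose from a single observed landmark.

    landmark_map: known (x, y) of the landmark in the environment model
    observation:  (range, bearing) of the landmark relative to the robot
    A crude closed-form correction for illustration only.
    """
    x, y, theta = pose
    r, b = observation
    lx, ly = landmark_map
    # heading that makes the predicted bearing agree with the measurement
    theta_c = math.atan2(ly - y, lx - x) - b
    # place the robot at the measured range from the landmark
    return (lx - r * math.cos(theta_c + b),
            ly - r * math.sin(theta_c + b),
            theta_c)
```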


Robotics and Autonomous Systems | 1998

Modeling unknown environments with a mobile robot

Peter Weckesser; Rüdiger Dillmann

The exploration of an unknown environment is an important task for the new generation of mobile service robots. These robots are supposed to operate in dynamic and changing environments together with human beings and other static or moving objects. Sensors that are capable of providing the quality of information required for the described scenario are optical sensors such as digital cameras and laser scanners. In this paper, sensor integration and fusion for such sensors are described. Complementary sensor information is transformed into a common representation in order to achieve a cooperating sensor system. Sensor fusion is performed by matching the local perception of a laser scanner and a camera system with a global model that is built up incrementally. The Mahalanobis distance is used as the matching criterion and a Kalman filter is used to fuse matching features. A common representation including the uncertainty and the confidence is used for all scene features. The system's performance is demonstrated for the task of exploring an unknown environment and incrementally building up a geometrical model of it.
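A minimal sketch of the matching and fusion step described here (and in several of the abstracts below): a locally perceived feature is gated against map features with the Mahalanobis distance and, on a match, fused by a linear Kalman update. The 2-D feature parameterization, the identity measurement model, and the chi-square gate value are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def mahalanobis_dist2(z_pred, z_meas, S):
    """Squared Mahalanobis distance between predicted and measured feature."""
    v = z_meas - z_pred                      # innovation
    return float(v @ np.linalg.inv(S) @ v)

def kalman_fuse(x, P, z, R, H):
    """One linear Kalman update fusing observation z into estimate x."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# 95% chi-square gate for 2-D features; the threshold value is an assumption.
GATE = 5.99

def match_and_fuse(local_feat, local_cov, map_feats, map_covs):
    """Match a local feature against the global model and fuse the best match.

    Returns None if the feature falls outside the gate for every map feature
    (a candidate new feature); otherwise returns the index of the matched map
    feature together with its fused estimate and covariance.
    """
    H = np.eye(2)                            # identity measurement model (assumed)
    best, best_d2 = None, GATE
    for i, (m, Pm) in enumerate(zip(map_feats, map_covs)):
        S = H @ Pm @ H.T + local_cov
        d2 = mahalanobis_dist2(H @ m, local_feat, S)
        if d2 < best_d2:
            best, best_d2 = i, d2
    if best is None:
        return None
    x, P = kalman_fuse(map_feats[best], map_covs[best], local_feat, local_cov, H)
    return best, x, P
```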


Intelligent Robots and Systems | 1997

Navigating a mobile service-robot in a natural environment using sensor-fusion techniques

Peter Weckesser; Rüdiger Dillmann; Ulrich Rembold

The mobile service robots described are designed to operate in dynamic and changing environments together with human beings and other static or moving objects. Sensors that are capable of providing the quality of information required for the described scenario are optical sensors like digital cameras and laser scanners. In this paper, the sensor integration and fusion for such sensors is described. Complementary sensor information is transformed into a common representation in order to achieve a cooperating sensor system. Sensor fusion is performed by matching the local perception of the laser scanner and camera system with a global model that is built up incrementally. The Mahalanobis distance is used as the matching criterion and a Kalman filter is used to fuse matching features. A common representation including the uncertainty and the confidence is used for all scene features. The system's performance is demonstrated for the task of exploring an unknown environment and incrementally building a geometrical model of it.


Intelligent Robots and Systems | 1996

Learning coordination skills in multi-agent systems

Michael Kaiser; Rüdiger Dillmann; Holger Friedrich; I-Shen Lin; Frank Wallner; Peter Weckesser

While distributed control architectures have many advantages over centralized ones, such as their inherent modularity and fault tolerance, a major problem of such architectures is to ensure the goal-oriented behaviour of the controlled system. This paper presents a framework within which the coordination skills required for goal-oriented behaviour are learned from user demonstrations. The framework is based on a state-space model of the individual agents that make up the system and a corresponding model of the coordination mechanism. Our mobile robot PRIAMOS provides an application example.
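One way to read the learning step: coordination commands demonstrated by the user in particular joint agent states are recorded and later generalized to new states. The nearest-neighbour generalization and the flat state encoding below are assumptions chosen for brevity, not the paper's actual learning scheme.

```python
import numpy as np

class DemonstratedCoordinator:
    """Learn a coordination mapping from user demonstrations.

    Stores (joint agent state, coordination command) pairs and replays the
    command demonstrated in the most similar state. Nearest-neighbour lookup
    is an illustrative assumption only.
    """

    def __init__(self):
        self.states = []    # joint states of all agents seen in demonstrations
        self.commands = []  # coordination command issued by the user in that state

    def record(self, joint_state, command):
        self.states.append(np.asarray(joint_state, dtype=float))
        self.commands.append(command)

    def coordinate(self, joint_state):
        """Return the demonstrated command for the most similar joint state."""
        q = np.asarray(joint_state, dtype=float)
        dists = [np.linalg.norm(q - s) for s in self.states]
        return self.commands[int(np.argmin(dists))]
```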


Intelligent Robots and Systems | 1996

Exploration of the environment with an active and intelligent optical sensor system

Peter Weckesser; Guido Appenzeller; A. von Essen; Rüdiger Dillmann

The exploration and mapping of unknown environments is an important task for the new generation of mobile service robots. These robots are supposed to operate in dynamic and changing environments together with humans and in interaction with other stationary or moving objects. This requires high flexibility and adaptability of the sensor system to changing environmental conditions. Sensors that are capable of providing the quality of information required for the described scenario are optical sensors like digital cameras and laser scanners. In this paper, a sensor system and an architecture for active control of the sensors and adaptive processing of the perceived sensor data are developed for service applications and experimentally evaluated.


Intelligent Robots and Systems | 1996

Active parameter control for the low level vision system of a mobile robot

Guido Appenzeller; Peter Weckesser; R. Dillmann

Computer vision systems are today an important sensor for intelligent robotic systems. However, the design of a vision system that a robot can use as a fast and robust sensor in a complex, partially unknown and dynamic environment is still difficult. A main reason for this is that the parameters of vision systems are often adjusted by hand and remain static during the operation of the robot. In this paper we present a general architecture that adapts the parameters of a segment-based low-level vision system dynamically to increase its speed and robustness. Adaptation is driven by a priori knowledge about the environment or by the sensor data itself. The architecture is implemented on a mobile robot using special hardware that allows real-time operation. Quantitative experimental data on its performance are given.
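As one concrete instance of such parameter adaptation, a segmentation threshold can be derived from the current image statistics rather than fixed by hand. The quantile rule and the target edge fraction below are illustrative assumptions, not the adaptation mechanism used in the paper.

```python
import numpy as np

def adapt_edge_threshold(gradient_magnitudes, target_edge_fraction=0.05):
    """Choose an edge threshold so that roughly a fixed fraction of pixels
    pass, instead of relying on a hand-tuned static value.

    gradient_magnitudes:  flattened array of per-pixel gradient strengths
    target_edge_fraction: desired fraction of edge pixels (assumed value)
    """
    return float(np.quantile(gradient_magnitudes, 1.0 - target_edge_fraction))

def segment_edges(gradient_magnitudes, threshold):
    """Binary edge map obtained with the adapted threshold."""
    return gradient_magnitudes >= threshold
```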


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 1996

Sensor-fusion of intensity- and laser range-images

Peter Weckesser; Rüdiger Dillmann; U. Rembold

The exploration of unknown environments is an important task for the new generation of mobile service robots. These robots are supposed to operate in dynamic and changing environments together with human beings and other static or moving objects. Sensors that are capable of providing the quality of information required for the described scenario are optical sensors like digital cameras and laser scanners. In this paper, sensor integration and fusion for such sensors are described. Complementary sensor information is transformed into a common representation in order to achieve a cooperating sensor system. Sensor fusion is performed by matching the local perception of a laser scanner and a camera system with a global model that is built up incrementally. The Mahalanobis distance is used as the matching criterion and a Kalman filter is used to fuse matching features. A common representation including the uncertainty and the confidence is used for all scene features. The system's performance is demonstrated for the task of exploring an unknown environment and incrementally building up a geometrical model of it.


IFAC Proceedings Volumes | 1998

Autonomous Roomservice in a Hotel

René Graf; Peter Weckesser

Mobile robots are no longer used only in laboratories, but are becoming more and more important in service applications. A very interesting and difficult environment is a hotel. This paper presents this new field of application for a service robot. After an introduction to the environment, the sensor equipment and the sensor data processing are explained.


Time-Varying Image Processing and Moving Object Recognition, 4: Proceedings of the 5th International Workshop, Florence, Italy, September 5–6, 1996 | 1997

Exploration of the environment with optical sensors mounted on a mobile robot

Peter Weckesser; A. von Essen; Guido Appenzeller; Rüdiger Dillmann

This chapter presents an approach to fusing sensor information from complementary sensors. The mobile robot PRIAMOS is used as an experimental testbed. A multisensor system supports the vehicle with odometric, sonar, visual, and laser scanner information. Sensor fusion is performed by matching the local perception of a laser scanner and a camera system with a global model that is built up incrementally. The Mahalanobis distance is used as the matching criterion and a Kalman filter is used to fuse matching features. The goal of the chapter is to develop and apply sensor fusion techniques in order to improve the system's performance for mobile robot positioning and exploration of unknown environments. The approach is able to deal with static as well as dynamic environments. On different levels of processing, geometrical, topological and semantic models are generated (exploration) or can be used as a priori information (positioning). The system's performance is demonstrated for the task of building a geometrical model of an unknown environment.


Modelling and Planning for Sensor Based Intelligent Robot Systems | 1994

PRIAMOS: An Advanced Mobile System for Service, Inspection, and Surveillance Tasks.

Rüdiger Dillmann; Michael Kaiser; Frank Wallner; Peter Weckesser

Collaboration


Dive into Peter Weckesser's collaborations.

Top Co-Authors

Rüdiger Dillmann, Center for Information Technology
Frank Wallner, Karlsruhe Institute of Technology
A. von Essen, Karlsruhe Institute of Technology
Michael Kaiser, Karlsruhe Institute of Technology
Holger Friedrich, Karlsruhe Institute of Technology
I-Shen Lin, Karlsruhe Institute of Technology
M. Elbs, Karlsruhe Institute of Technology
S. Hampel, Karlsruhe Institute of Technology