
Publication


Featured research published by Wolfgang Ponweiser.


parallel problem solving from nature | 2010

On expected-improvement criteria for model-based multi-objective optimization

Tobias Wagner; Michael Emmerich; André H. Deutz; Wolfgang Ponweiser

Surrogate models, as used for the Design and Analysis of Computer Experiments (DACE), can significantly reduce the resources necessary in cases of expensive evaluations. They provide a prediction of the objective and of the corresponding uncertainty, which can then be combined into a figure of merit for sequential optimization. In single-objective optimization, the expected improvement (EI) has proven to provide a combination that successfully balances local and global search. It has therefore recently been adapted to evolutionary multi-objective optimization (EMO) in different ways. In this paper, we provide an overview of the existing EI extensions for EMO and propose new formulations of the EI based on the hypervolume. We set up a list of necessary and desirable properties, which is used to reveal the strengths and weaknesses of the criteria through both theoretical and experimental analyses.
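As a concrete illustration of the criterion the abstract builds on, here is a minimal sketch of the single-objective expected improvement for a Gaussian prediction under minimization; the function name and interface are illustrative, not taken from the paper.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI of a Gaussian prediction N(mu, sigma^2) against the best
    observed value f_best, for minimization (illustrative sketch)."""
    if sigma <= 0.0:
        # Degenerate prediction: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    # Standard normal pdf and cdf via math.erf.
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf
```

The hypervolume-based multi-objective formulations discussed in the paper generalize this scalar trade-off between predicted value and predictive uncertainty.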


world congress on computational intelligence | 2008

Clustered multiple generalized expected improvement: A novel infill sampling criterion for surrogate models

Wolfgang Ponweiser; Tobias Wagner; Markus Vincze

Surrogate model-based optimization is a well-known technique for optimizing expensive black-box functions. By applying this function approximation, the number of real problem evaluations can be reduced because the optimization is performed on the model. In this case, two contradictory targets have to be achieved: increasing global model accuracy and exploiting potentially optimal areas. The key to these targets is the criterion for selecting the next point, which is then evaluated on the expensive black-box function: the 'infill sampling criterion'. Therefore, a novel approach, the 'Clustered Multiple Generalized Expected Improvement' (CMGEI), is introduced and motivated by an empirical study. Furthermore, experiments benchmarking its performance against the state of the art are presented.
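The clustering idea behind a multi-point infill criterion can be sketched schematically: score candidate points with an improvement criterion, group them into clusters, and evaluate the best-scoring candidate of each cluster. The minimal sketch below assumes cluster labels have already been computed; it illustrates the selection step only and is not the paper's algorithm.

```python
def select_infill_points(candidates, scores, labels):
    """Return one infill point per cluster: the highest-scoring
    candidate within each cluster label (schematic sketch)."""
    best = {}  # cluster label -> (candidate, score)
    for x, s, c in zip(candidates, scores, labels):
        if c not in best or s > best[c][1]:
            best[c] = (x, s)
    return [x for x, _ in best.values()]
```

Selecting one point per cluster spreads the expensive evaluations across distinct promising regions instead of concentrating them around a single optimum of the criterion.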


The International Journal of Robotics Research | 2001

Edge-Projected Integration of Image and Model Cues for Robust Model-Based Object Tracking

Markus Vincze; Minu Ayromlou; Wolfgang Ponweiser; Michael Zillich

A real-world limitation of visual servoing approaches is the sensitivity of visual tracking to varying ambient conditions and background clutter. The authors present a model-based vision framework to improve the robustness of edge-based feature tracking. Lines and ellipses are tracked using edge-projected integration of cues (EPIC). EPIC uses cues in regions delineated by edges that are defined by observed edgels and a priori knowledge from a wire-frame model of the object. The edgels are then used for a robust fit of the feature geometry, but at times this results in multiple feature candidates. A final validation step uses the model topology to select the most likely feature candidates. EPIC is suited for real-time operation. Experiments demonstrate operation at frame rate. Navigating a walking robot through an industrial environment shows the robustness to varying lighting conditions. Tracking objects over varying backgrounds indicates robustness to clutter.


Computer Vision and Image Understanding | 2009

Integrated vision system for the semantic interpretation of activities where a person handles objects

Markus Vincze; Michael Zillich; Wolfgang Ponweiser; Václav Hlaváč; Jiri Matas; Stepán Obdrzálek; Hilary Buxton; A. Jonathan Howell; Kingsley Sage; Antonis A. Argyros; Christof Eberst; Gerald Umgeher

Interpretation of human activity is primarily known from surveillance and video-analysis tasks and is usually concerned with the person alone. In this paper we present an integrated system that gives a natural-language interpretation of activities in which a person handles objects. The system integrates low-level image components, such as hand and object tracking, detection and recognition, with high-level processes such as spatio-temporal object-relationship generation, posture and gesture recognition, and activity reasoning. A task-oriented approach focuses processing to achieve near real-time operation and to react depending on the situation context.


international conference on computer vision systems | 2001

A System to Navigate a Robot into a Ship Structure

Markus Vincze; Minu Ayromlou; Carlos Beltran; Antonios Gasteratos; Simon Hoffgaard; Ole Madsen; Wolfgang Ponweiser; Michael Zillich

A prototype system has been built to navigate a walking robot into a ship structure. The robot is equipped with a stereo head for monocular and stereo vision. From the CAD model of the ship, good viewpoints are selected such that the head can look at locations with sufficient features. The edge features for the views are extracted automatically. The pose of the robot is estimated from the features detected by two vision approaches. One approach searches the full image for junctions and uses the stereo information to extract 3D information. The other method is monocular and tracks 2D edge features. To achieve robust tracking of the features, a model-based tracking approach is enhanced with Edge Projected Integration of Cues (EPIC). EPIC uses object knowledge to select the correct features in real time. The two vision systems are synchronised by sending the images over a fibre-channel network. The pose estimation uses both the 2D and 3D features and locates the robot to within a few centimetres over ship cells spanning several metres. Gyros are used to stabilise the head while the robot moves. The system has been developed within the RobVision project, and the results of the final demonstration are given.


international conference on evolutionary multi criterion optimization | 2007

The multiple multi objective problem: definition, solution and evaluation

Wolfgang Ponweiser; Markus Vincze

Considering external parameters during any evaluation leads to an optimization problem that has to handle several concurrent multi-objective problems at once. This novel challenge, the Multiple Multi-Objective Problem (M-MOP), is defined and analyzed. Guidelines and metrics for the development of M-MOP optimizers are derived and demonstrated by example on an extended version of Deb's NSGA-II algorithm. The relationship to classical MOPs is highlighted, and the usage of performance metrics for the M-MOP is discussed. Due to the increased number of dimensions, the M-MOP represents a complex optimization task that should be addressed by the optimization community.
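Since an M-MOP consists of several concurrent multi-objective problems, its basic building block is the Pareto-dominance relation used by NSGA-II. A minimal sketch for minimization follows (illustrative only, not the paper's extended algorithm):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In an M-MOP, a filter like this would be applied per concurrent sub-problem, which is one source of the increased dimensionality the abstract mentions.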


international conference on pattern recognition | 2004

Integration frameworks for large scale cognitive vision systems - an evaluative study

Sebastian Wrede; Christian Bauckhage; Gerhard Sagerer; Wolfgang Ponweiser; Markus Vincze

Owing to the ever-growing complexity of present-day computer vision systems, system architecture has become an emerging topic in vision research. Systems that integrate numerous modules and algorithms of different I/O and time-scale behavior require sound and reliable concepts for interprocess communication. Consequently, topics and methods known from software and systems engineering are becoming increasingly important. In particular, framework technologies for system integration are required. This contribution results from a cooperation between two multinational projects on cognitive vision. It discusses functional and non-functional requirements in cognitive vision and compares and assesses existing solutions.


Robotics and Autonomous Systems | 2005

A software framework to integrate vision and reasoning components for Cognitive Vision Systems

Wolfgang Ponweiser; Markus Vincze; Michael Zillich

Cognitive Vision Systems (CVSys) are systems that combine computer vision with reasoning and semantic-interpretation components to achieve high-level tasks. Examples are the interpretation of activities a human executes when handling objects, or fetching objects with a companion robot. Such applications require a large number of functionalities, e.g., perception-action mapping, recognition and categorisation, prediction, reaction to actions, symbolic interpretation, and communication with humans or other systems. Within this contribution, these cognitive vision functionalities are encapsulated in components. The objective is to provide clearly specified components which can be reused in other cognitive vision or robotics systems. To build a system from these functionalities, it is considered essential to provide a framework that coordinates the components. The framework is built on the service principle, using a "Yellow Pages" directory through which each component announces its capabilities and selects other components. The paper summarises the experiences of integrating the components in a context-oriented system for activity interpretation based on task-driven processing of the components. The discussion highlights the advances in vision with respect to exploitation in autonomous robotic systems.
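The "Yellow Pages" service principle can be sketched as a capability directory in which components announce what they provide and look up what they need. A minimal illustrative sketch follows; the class and method names are assumptions, not the project's API.

```python
class YellowPages:
    """Minimal service directory in the spirit of the described
    framework: components announce capabilities and look them up."""

    def __init__(self):
        self._services = {}  # capability name -> list of providers

    def announce(self, capability, provider):
        """Register a component as a provider of a capability."""
        self._services.setdefault(capability, []).append(provider)

    def lookup(self, capability):
        """Return all known providers of a capability (may be empty)."""
        return list(self._services.get(capability, []))
```

Decoupling components through such a directory is what allows them to be reused in other cognitive vision or robotics systems, as the abstract emphasises.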


international conference on multisensor fusion and integration for intelligent systems | 2001

RobVision: vision based navigation for mobile robots

Wolfgang Ponweiser; Minu Ayromlou; Markus Vincze; C. Beltran; Ole Madsen; Antonios Gasteratos

This paper introduces the system, developed during the Esprit project RobVision (robust vision for sensing in industrial operations and needs), that navigates a climbing robot through a ship section for inspection and welding tasks. The basic idea is to continuously generate robot position and orientation (pose) signals by matching the visual sensing information from the environment against predetermined CAD information. The key to robust behaviour is the integration of two different vision methods: one measures 3D junctions with a stereo head; the other tracks edge and junction features in a single image. To make tracking robust and fast, model knowledge such as the feature topology, object side, and view-dependent information is utilised. The pose-calculation step then integrates the findings of both vision systems, detects outliers, and sends the result to the robot. Real-time capability is important to reach an acceptable performance of the overall system. Presently a pose-update cycle time of 120 ms has been achieved. Accelerometers were used to compensate for jerks of the robot. Experiments show that our approach is feasible and meets the required positioning accuracies.


international conference on computer vision systems | 2006

Contextual Coordination in a Cognitive Vision System for Symbolic Activity Interpretation

Markus Vincze; Wolfgang Ponweiser; Michael Zillich

In this paper we present a vision system that gives a natural-language interpretation of activities in which a person handles objects. The system integrates low-level image components, such as hand and object tracking, detection and recognition, and posture and gesture recognition, with high-level processes such as spatio-temporal object-relationship generation and activity reasoning. To achieve near real-time operation, a task-oriented approach focuses processing depending on the situation context. Vision components are dynamically coordinated, and their interrelations and regions of interest are continuously adapted to the interpretation task. The task of putting a CD into a CD player with one or two hands demonstrates the operation of the system.

Collaboration

Dive into Wolfgang Ponweiser's collaboration.

Top Co-Authors

Markus Vincze, Vienna University of Technology
Michael Zillich, Vienna University of Technology
Minu Ayromlou, Vienna University of Technology
Tobias Wagner, Technical University of Dortmund
Antonios Gasteratos, Democritus University of Thrace
Dietmar Legenstein, Vienna University of Technology
Stefan Chroust, Vienna University of Technology