Minu Ayromlou
Vienna University of Technology
Publications
Featured research published by Minu Ayromlou.
The International Journal of Robotics Research | 2001
Markus Vincze; Minu Ayromlou; Wolfgang Ponweiser; Michael Zillich
A real-world limitation of visual servoing approaches is the sensitivity of visual tracking to varying ambient conditions and background clutter. The authors present a model-based vision framework to improve the robustness of edge-based feature tracking. Lines and ellipses are tracked using edge-projected integration of cues (EPIC). EPIC uses cues in regions delineated by edges that are defined by observed edgels and a priori knowledge from a wire-frame model of the object. The edgels are then used for a robust fit of the feature geometry, but at times this results in multiple feature candidates. A final validation step uses the model topology to select the most likely feature candidates. EPIC is suited for real-time operation. Experiments demonstrate operation at frame rate. Navigating a walking robot through an industrial environment shows the robustness to varying lighting conditions. Tracking objects over varying backgrounds indicates robustness to clutter.
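The robust fit of feature geometry to observed edgels can be sketched with a RANSAC-style line fit: repeatedly hypothesise a line from two sampled edgels and keep the hypothesis with the most supporting edgels. This is a generic illustration, not the paper's exact EPIC implementation; the function name, iteration count, and inlier tolerance are assumptions.

```python
import random

def robust_line_fit(edgels, n_iter=200, inlier_tol=1.5, seed=0):
    """RANSAC-style robust fit of a line to 2-D edgel points.

    Returns ((a, b, c), inliers) for the line a*x + b*y + c = 0,
    with (a, b) normalised, chosen to maximise the inlier count.
    """
    rng = random.Random(seed)
    best_line, best_inliers = None, []
    pts = list(edgels)
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        a, b = y1 - y2, x2 - x1          # normal of the line through the pair
        norm = (a * a + b * b) ** 0.5
        if norm == 0:                    # degenerate sample (coincident points)
            continue
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        # support = edgels within perpendicular distance inlier_tol
        inliers = [(x, y) for (x, y) in pts if abs(a * x + b * y + c) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers
```

Because only the consensus set drives the result, scattered clutter edgels are rejected rather than dragging the fit, which matches the abstract's point about selecting among multiple feature candidates.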
IEEE Robotics & Automation Magazine | 2005
Markus Vincze; Matthias J. Schlemmer; Peter Gemeiner; Minu Ayromlou
Vision for Robotics (V4R) is a software package for tracking rigid objects in unknown surroundings. Its output is the 3-D pose of the target object, which can then be used as an input for control, e.g., of a robot's end effector. The major goals are tracking at camera frame rate and robustness. The latter is achieved by performing cue integration to compensate for the weaknesses of individual cues. Therefore, features such as lines and ellipses are not only extracted from 2-D images; the 3-D model and the pose of the object are also exploited.
International Conference on Computer Vision Systems | 1999
Markus Vincze; Minu Ayromlou; Wilfried Kubinger
Vision-based control of motion is only feasible if vision provides reliable control signals and full system integration is achieved. In this paper we address these two issues. A modular system architecture is built up around the basic primitives of object tracking: the features of the object. The initialisation is partly automated by using search functions to describe the task. The features found and tracked in the image are contained in a wire-frame model of the object as seen in the image. This model is used for feature tracking and continuous pose determination. Particularly needed is a method of robust feature tracking. This is achieved using EPIC, a method of Edge-Projected Integration of Cues. A demonstration shows how the robot follows the pose of an object moved by hand in common room lighting at frame rate using a PC.
International Conference on Computer Vision Systems | 2001
Markus Vincze; Minu Ayromlou; Carlos Beltran; Antonios Gasteratos; Simon Hoffgaard; Ole Madsen; Wolfgang Ponweiser; Michael Zillich
A prototype system has been built to navigate a walking robot into a ship structure. The robot is equipped with a stereo head for monocular and stereo vision. From the CAD-model of the ship good viewpoints are selected such that the head can look at locations with sufficient features. The edge features for the views are extracted automatically. The pose of the robot is estimated from the features detected by two vision approaches. One approach searches in the full image for junctions and uses the stereo information to extract 3D information. The other method is monocular and tracks 2D edge features. To achieve robust tracking of the features a model-based tracking approach is enhanced with a method of Edge Projected Integration of Cues (EPIC). EPIC uses object knowledge to select the correct features in real-time. The two vision systems are synchronised by sending the images over a fibre channel network. The pose estimation uses both the 2D and 3D features and locates the robot within a few centimetres over the range of ship cells of several metres. Gyros are used to stabilise the head while the robot moves. The system has been developed within the RobVision project and the results of the final demonstration are given.
International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2001
Wolfgang Ponweiser; Minu Ayromlou; Markus Vincze; Carlos Beltran; Ole Madsen; Antonios Gasteratos
This paper introduces the system, developed during the Esprit project RobVision (robust vision for sensing in industrial operations and needs), that navigates a climbing robot through a ship section for inspection and welding tasks. The basic idea is to continuously generate robot position and orientation (pose) signals by matching the visual sensing information from the environment with predetermined CAD information. The key to robust behaviour is the integration of two different vision methods: one measures the 3D junctions with a stereo head, the other tracks the edge and junction features in a single image. To render tracking robust and fast, model knowledge such as the feature topology, object side, and view-dependent information is utilised. The pose calculation step then integrates the findings of both vision systems, detects outliers and sends the result to the robot. Real-time capability is important to reach an acceptable performance of the overall system. Presently a pose update cycle time of 120 ms has been achieved. Because the robot's motion introduces jerks, accelerometers were used for stabilisation. Experiments show that our approach is feasible and meets the required positioning accuracies.
International Conference on Pattern Recognition | 2000
Markus Vincze; Minu Ayromlou; Michael Zillich
Commercial applications of ellipse tracking require robustness and real-time capability. The method presented tracks ellipses at field rate using a Pentium PC. Robustness is obtained by integrating gradient and intensity values for the detection of contour edges and by using a RANSAC-like method to find the most likely ellipse. The method adapts to the appearance along the ellipse circumference and effectively separates object from background. Experiments document the capabilities of the approach with real-world examples.
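A RANSAC-like search for the most likely ellipse can be sketched as follows: sample five edgels, solve for the conic through them, and keep the hypothesis supported by the most edgels. This is a generic reconstruction, not the paper's implementation; it assumes the conic does not pass through the image origin (so the constant term can be fixed to 1), and all names and tolerances are illustrative.

```python
import math
import random

def _solve5(A, b):
    """Solve a 5x5 linear system by Gaussian elimination with partial
    pivoting; returns None if the system is (near-)singular."""
    n = 5
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ransac_ellipse(edgels, n_iter=300, tol=0.05, seed=1):
    """RANSAC-like conic fit: sample 5 edgels, solve for the conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + 1 = 0, and keep the hypothesis
    with the most inliers (by normalised algebraic distance)."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    pts = list(edgels)
    for _ in range(n_iter):
        sample = rng.sample(pts, 5)
        A = [[x * x, x * y, y * y, x, y] for (x, y) in sample]
        coeffs = _solve5(A, [-1.0] * 5)   # move the fixed constant term across
        if coeffs is None:
            continue
        a, b, c, d, e = coeffs
        norm = math.sqrt(a * a + b * b + c * c + d * d + e * e + 1.0)
        inliers = [(x, y) for (x, y) in pts
                   if abs(a * x * x + b * x * y + c * y * y + d * x + e * y + 1.0) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b, c, d, e), inliers
    return best, best_inliers
```

The consensus step is what separates object contour from background: clutter edgels rarely agree on a single conic, so the winning hypothesis follows the ellipse circumference.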
Machine Vision and Applications | 2003
Markus Vincze; Minu Ayromlou; Carlos Beltran; Antonios Gasteratos; Simon Hoffgaard; Ole Madsen; Wolfgang Ponweiser; Michael Zillich
A prototype system has been built to navigate a walking robot into a ship structure. The 8-legged robot is equipped with an active stereo head. From the CAD-model of the ship, good viewpoints are selected such that the head can look at locations with sufficient edge features, which are extracted automatically for each view. The pose of the robot is estimated from the features detected by two vision approaches. One approach searches in stereo images for junctions and measures the 3-D position. The other method uses monocular images and tracks 2-D edge features. Robust tracking is achieved with a method of edge-projected integration of cues (EPIC). Two inclinometers are used to stabilise the head while the robot moves. The results of the final demonstration, navigating the robot with centimetre accuracy, are given.
International Conference on Image Analysis and Processing | 1999
Markus Vincze; Minu Ayromlou; Wilfried Kubinger
This paper reports new techniques to render real-time image-based tracking methods more robust, enabling the control of robots in an arbitrary 3D indoor environment. The methods enable robust feature tracking by integrating several cues within tracking windows. Integration is achieved using EPIC, a method of edge-projected integration of cues, at field rate on a common Pentium PC. A thorough analysis of the control loop indicates the need to operate at field rate to obtain optimal dynamic performance. The demonstration verifies this and shows the application of following, in 6D, a grey part over a grey metal background using common room lighting.
International Conference on Pattern Recognition | 2002
Minu Ayromlou; Markus Vincze; Wolfgang Ponweiser
Background clutter poses a difficult problem for edge matching within model-based object tracking approaches. Matching all possible candidate image features against the model features is computationally infeasible for real-time tracking. The authors propose to draw probabilistic samples of candidate sets based on measures for local topological constraints; line features have parallelism and junction constraints. Continuous measures are used to evaluate the matching of the feature sets, avoiding thresholds. This approach limits the number of matchings, and processing time increases linearly with the number of features. Experiments show the correct selection among multiple candidates for different scenarios.
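The idea of sampling candidate sets with a continuous, threshold-free constraint measure can be sketched as below. This is a minimal illustration, not the paper's scheme: it uses only a parallelism constraint, an assumed Gaussian angular measure with an arbitrary tolerance, and hypothetical data structures (each candidate is an orientation/label pair).

```python
import math
import random

def parallel_measure(theta1, theta2, sigma=math.radians(5)):
    """Continuous (threshold-free) measure of how parallel two line
    orientations are: 1 when equal, decaying as a Gaussian in the
    angular difference. sigma is an assumed tolerance."""
    d = abs(theta1 - theta2) % math.pi
    d = min(d, math.pi - d)               # orientations are modulo 180 degrees
    return math.exp(-(d * d) / (2 * sigma * sigma))

def sample_candidate_sets(candidates_per_feature, model_angles, n_samples=100, seed=0):
    """Draw candidate sets probabilistically: each feature's candidate is
    sampled with probability proportional to its parallel-constraint measure
    against the model orientation, and the best-scoring sampled set wins.
    Work per sample grows linearly with the number of features."""
    rng = random.Random(seed)
    best_set, best_score = None, -1.0
    for _ in range(n_samples):
        chosen, score = [], 1.0
        for cands, model_theta in zip(candidates_per_feature, model_angles):
            weights = [parallel_measure(theta, model_theta) for (theta, _) in cands]
            idx = rng.choices(range(len(cands)), weights=weights)[0]
            chosen.append(cands[idx][1])  # keep the candidate's label
            score *= weights[idx]
        if score > best_score:
            best_set, best_score = chosen, score
    return best_set, best_score
```

Because candidates are sampled rather than enumerated, the combinatorial explosion of matching every image feature against every model feature is avoided, which is the point the abstract makes about linear processing time.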
Revised Papers from the International Workshop on Sensor Based Intelligent Robots | 2000
Markus Vincze; Minu Ayromlou; Stefan Chroust; Michael Zillich; Wolfgang Ponweiser; Dietmar Legenstein
Vision-based control needs fast and robust tracking. The conditions for fast tracking are derived from studying the dynamics of the visual servoing loop. The result indicates how to build the vision system to obtain high dynamic performance of tracking. Maximum tracking velocity is obtained when running image acquisition and processing in parallel and using appropriately sized tracking windows. To achieve the second criterion, robust tracking, a model-based tracking approach is enhanced with a method of Edge Projected Integration of Cues (EPIC). EPIC uses object knowledge to select the correct feature in real-time. The object pose is calculated from the features at every tracking cycle. The components of the tracking system have been implemented in a framework called Vision for Robotics (V4R). V4R has been used within the EU-funded project RobVision to navigate a robot into a ship section using the model data from the CAD design. The experiments show the performance of tracking in different parts of the ship mock-up.
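The timing argument can be illustrated with a toy model (a simplification, not the paper's derivation): pipelining image acquisition and processing bounds the cycle time by the slower stage rather than their sum, and the tracking-window size caps the image-plane velocity a feature may have while still staying inside its window between cycles.

```python
def cycle_time(t_acq, t_proc, pipelined=True):
    """Effective tracking cycle time in seconds: with acquisition and
    processing pipelined in parallel the cycle is bounded by the slower
    stage; run sequentially, it is their sum."""
    return max(t_acq, t_proc) if pipelined else t_acq + t_proc

def max_feature_velocity(window_px, cycle_s):
    """Rough upper bound on trackable image-plane velocity (pixels/s):
    the feature must stay inside the tracking window between two cycles,
    so it may move at most half the window width per cycle. The factor
    of one half is an assumption of this sketch."""
    return (window_px / 2.0) / cycle_s
```

For example, with 40 ms acquisition and 20 ms processing, pipelining keeps the cycle at 40 ms instead of 60 ms, and a 40-pixel window then allows roughly 500 pixels/s of feature motion.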