Publications


Featured research published by Markus Ulrich.


International Conference on Robotics and Automation | 2009

CAD-based recognition of 3D objects in monocular images

Markus Ulrich; Christian Wiedemann; Carsten Steger

This paper presents a method for recognizing 3D objects in a single camera image and for determining their 3D poses.


Pattern Recognition | 2003

Real-time object recognition using a modified generalized Hough transform

Markus Ulrich; Carsten Steger; Albert Baumgartner

A technique for real-time object recognition in digital images is described. On the one hand, our approach combines robustness against occlusions, clutter, arbitrary illumination changes, and noise with invariance under rigid motion, i.e., translation and rotation. On the other hand, the computational effort is small enough to fulfill the requirements of real-time applications. Our approach uses a modification of the generalized Hough transform (GHT) to improve the GHT's performance: a novel, efficient limitation of the search space in combination with a hierarchical search strategy reduces the computational effort. To meet the demands for high precision in industrial tasks, a subsequent refinement adjusts the final pose parameters. An empirical performance evaluation of the modified GHT is presented by comparing it to two standard 2D object recognition techniques.
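
The classical GHT on which this modification builds can be sketched in a few lines: an R-table maps quantized gradient directions of a template to offsets from a reference point, and each edge pixel of the search image then votes for possible reference-point positions. The minimal, translation-only Python sketch below uses our own names and a simple gradient-based edge extraction, and omits the paper's search-space limitation, hierarchical search, and pose refinement.

```python
# Minimal sketch of a classical generalized Hough transform (GHT), translation only.
# The paper's modified GHT additionally handles rotation, limits the search space,
# and searches hierarchically. Names are illustrative, not from the paper.
import numpy as np

def edge_points_and_angles(image, threshold=0.1):
    """Extract edge points and gradient directions with simple finite differences."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    angles = np.arctan2(gy, gx)
    ys, xs = np.nonzero(magnitude > threshold * magnitude.max())
    return np.column_stack([ys, xs]), angles[ys, xs]

def build_r_table(template, n_bins=36):
    """R-table: for each quantized gradient direction, store offsets to a reference point."""
    points, angles = edge_points_and_angles(template)
    reference = points.mean(axis=0)
    r_table = [[] for _ in range(n_bins)]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    for (y, x), b in zip(points, bins):
        r_table[b].append(reference - (y, x))
    return r_table

def ght_accumulate(image, r_table, n_bins=36):
    """Each edge pixel votes for possible reference-point positions in the search image."""
    accumulator = np.zeros(image.shape, dtype=int)
    points, angles = edge_points_and_angles(image)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    for (y, x), b in zip(points, bins):
        for dy, dx in r_table[b]:
            ry, rx = int(round(y + dy)), int(round(x + dx))
            if 0 <= ry < image.shape[0] and 0 <= rx < image.shape[1]:
                accumulator[ry, rx] += 1
    return accumulator  # the argmax gives the most likely object position
```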


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Combining Scale-Space and Similarity-Based Aspect Graphs for Fast 3D Object Recognition

Markus Ulrich; Christian Wiedemann; Carsten Steger

This paper describes an approach for recognizing instances of a 3D object in a single camera image and for determining their 3D poses. A hierarchical model is generated solely based on the geometry information of a 3D CAD model of the object. The approach does not rely on texture or reflectance information of the object's surface, making it useful for a wide range of industrial and robotic applications, e.g., bin picking. A hierarchical view-based approach that addresses typical problems of previous methods is applied: it handles true perspective, is robust to noise, occlusions, and clutter to an extent that is sufficient for many practical applications, and is invariant to contrast changes. For the generation of this hierarchical model, a new model image generation technique by which scale-space effects can be taken into account is presented. The necessary object views are derived using a similarity-based aspect graph. The high robustness of an exhaustive search is combined with an efficient hierarchical search. The 3D pose is refined by using a least-squares adjustment that minimizes geometric distances in the image, yielding a position accuracy of up to 0.12 percent with respect to the object distance and an orientation accuracy of up to 0.35 degrees in our tests. The recognition time is largely independent of the complexity of the object, but depends mainly on the range of poses within which the object may appear in front of the camera. For efficiency reasons, the approach allows the pose range to be restricted depending on the application. Typical runtimes are in the range of a few hundred milliseconds.
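
As a rough illustration of the kind of least-squares pose refinement mentioned above, the following sketch adjusts a coarse 6-DOF pose so that projected model points fall close to their observed image positions. The paper minimizes geometric distances of projected model edges; here, point correspondences and a simple pinhole camera stand in for that, and all names and parameters are assumptions for illustration.

```python
# Minimal sketch of least-squares pose refinement with point correspondences.
# The paper's refinement works on projected model edges; this is a simplification.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, rvec, tvec, focal_length, principal_point):
    """Project 3D model points with a pinhole camera at pose (rvec, tvec)."""
    cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
    uv = cam[:, :2] / cam[:, 2:3]
    return focal_length * uv + principal_point

def refine_pose(points_3d, points_2d, pose0, focal_length=800.0,
                principal_point=(320.0, 240.0)):
    """Refine a coarse 6-DOF pose (3 rotation + 3 translation parameters)."""
    def residuals(pose):
        projected = project(points_3d, pose[:3], pose[3:], focal_length,
                            np.asarray(principal_point))
        return (projected - points_2d).ravel()
    return least_squares(residuals, pose0).x
```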


Joint Pattern Recognition Symposium | 2008

Recognition and Tracking of 3D Objects

Christian Wiedemann; Markus Ulrich; Carsten Steger

This paper describes a method for recognizing and tracking 3D objects in a single camera image and for determining their 3D poses. A model is trained solely based on the geometry information of a 3D CAD model of the object. We do not rely on texture or reflectance information of the object's surface, making this approach useful for a wide range of object types and complementary to descriptor-based approaches. An exhaustive search, which ensures that the globally best matches are always found, is combined with an efficient hierarchical search, a high percentage of which can be computed offline, making our method suitable even for time-critical applications. The method is especially suited for, but not limited to, the recognition and tracking of untextured objects like metal parts, which are often used in industrial environments.
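
The efficient hierarchical search mentioned above can be illustrated, in spirit, by a coarse-to-fine search on an image pyramid: an exhaustive search on the coarsest level keeps the globally best candidate from being missed there, and each finer level only refines locally. The sketch below uses a plain normalized cross-correlation score and 2D translation only; the paper's actual search is view-based and largely precomputed offline, so this is a simplified, assumed stand-in.

```python
# Minimal sketch of a coarse-to-fine hierarchical search on an image pyramid.
# Names and the NCC score are illustrative assumptions, not the paper's method.
import numpy as np

def downsample(image):
    """Halve the resolution by 2x2 block averaging."""
    h, w = (image.shape[0] // 2) * 2, (image.shape[1] // 2) * 2
    return image[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def ncc(patch, template):
    """Normalized cross-correlation between a patch and a template."""
    p, t = patch - patch.mean(), template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum()) + 1e-12
    return float((p * t).sum() / denom)

def best_match(image, template, candidates):
    """Evaluate candidate top-left positions and return the best one."""
    th, tw = template.shape
    scored = [(ncc(image[y:y + th, x:x + tw], template), (y, x))
              for y, x in candidates
              if y + th <= image.shape[0] and x + tw <= image.shape[1]]
    return max(scored)[1]

def hierarchical_search(image, template, levels=3):
    """Find the template coarse-to-fine: exhaustive at the top, local below."""
    images, templates = [image], [template]
    for _ in range(levels - 1):
        images.append(downsample(images[-1]))
        templates.append(downsample(templates[-1]))
    top_img, top_tpl = images[-1], templates[-1]
    candidates = [(y, x)
                  for y in range(top_img.shape[0] - top_tpl.shape[0] + 1)
                  for x in range(top_img.shape[1] - top_tpl.shape[1] + 1)]
    y, x = best_match(top_img, top_tpl, candidates)  # exhaustive on coarsest level
    for level in range(levels - 2, -1, -1):          # local refinement on finer levels
        y, x = 2 * y, 2 * x
        local = [(y + dy, x + dx) for dy in range(-2, 3) for dx in range(-2, 3)]
        y, x = best_match(images[level], templates[level], local)
    return y, x
```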


Scandinavian Conference on Image Analysis | 2017

Subpixel-Precise Tracking of Rigid Objects in Real-Time

Tobias Böttger; Markus Ulrich; Carsten Steger

We present a novel object tracking scheme that can track rigid objects in real time. The approach uses subpixel-precise image edges to track objects with high accuracy. It can determine the object position, scale, and rotation with subpixel precision at around 80 fps. The tracker returns a reliable score for each frame and is capable of self-diagnosing a tracking failure. Furthermore, the choice of the similarity measure makes the approach inherently robust against occlusion, clutter, and nonlinear illumination changes. We evaluate the method on sequences of rigid objects from the OTB-2015 and VOT2016 datasets and discuss its performance. The evaluation shows that the tracker is more accurate than state-of-the-art real-time trackers while being equally robust.
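
A common building block for subpixel-precise measurements such as those above is to refine an integer peak position by fitting a parabola through the peak and its two neighbors. The sketch below shows only this refinement step (the tracker itself combines subpixel edges with a robust similarity measure); function names and the example profile are illustrative.

```python
# Minimal sketch of subpixel peak localization by parabola fitting.
import numpy as np

def subpixel_peak_1d(values, index):
    """Refine an integer peak position with a parabola through its two neighbors."""
    left, center, right = values[index - 1], values[index], values[index + 1]
    denom = left - 2.0 * center + right
    if abs(denom) < 1e-12:
        return float(index)
    offset = 0.5 * (left - right) / denom
    return index + offset

# Example: gradient magnitudes along a scan line; the true edge lies between samples.
profile = np.array([0.1, 0.3, 0.9, 1.0, 0.4, 0.1])
peak = int(np.argmax(profile))
print(subpixel_peak_1d(profile, peak))  # ~2.64, i.e. slightly left of sample 3
```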


Pattern Recognition and Image Analysis | 2016

Hand-eye calibration of SCARA robots using dual quaternions

Markus Ulrich; Carsten Steger

In SCARA robots, which are often used in industrial applications, all joint axes are parallel, covering three degrees of freedom in translation and one degree of freedom in rotation. Therefore, conventional approaches for the hand-eye calibration of articulated robots cannot be used for SCARA robots. In this paper, we present a new linear method that is based on dual quaternions and extends the work of Daniilidis (1999, IJRR) to SCARA robots. To improve the accuracy, a subsequent nonlinear optimization is proposed. We address several practical implementation issues and show the effectiveness of the method by evaluating it on synthetic and real data.
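
For readers unfamiliar with the representation, a rigid motion (R, t) can be encoded as a unit dual quaternion q_real + eps * q_dual with q_dual = (1/2) t * q_real, which is the algebraic tool the method shares with Daniilidis (1999). The sketch below shows only this encoding, not the actual AX = XB calibration; names are illustrative.

```python
# Minimal sketch of encoding a rigid motion (R, t) as a unit dual quaternion.
# This is only the representation; the calibration itself solves AX = XB.
import numpy as np
from scipy.spatial.transform import Rotation

def quat_mul(q, r):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rigid_to_dual_quaternion(rotation_matrix, translation):
    """Encode a rigid motion (R, t) as a unit dual quaternion (q_real, q_dual)."""
    q_xyzw = Rotation.from_matrix(rotation_matrix).as_quat()  # scipy returns (x, y, z, w)
    q_real = np.array([q_xyzw[3], *q_xyzw[:3]])               # reorder to (w, x, y, z)
    t_quat = np.array([0.0, *translation])
    q_dual = 0.5 * quat_mul(t_quat, q_real)
    return q_real, q_dual
```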


European Conference on Computer Vision | 2018

MVTec D2S: Densely Segmented Supermarket Dataset

Patrick Follmann; Tobias Böttger; Philipp Härtinger; Rebecca König; Markus Ulrich

We introduce the Densely Segmented Supermarket (D2S) dataset, a novel benchmark for instance-aware semantic segmentation in an industrial domain. It contains 21,000 high-resolution images with pixel-wise labels of all object instances. The objects comprise groceries and everyday products from 60 categories. The benchmark is designed such that it resembles the real-world setting of an automatic checkout, inventory, or warehouse system. The training images contain only objects of a single class on a homogeneous background, while the validation and test sets are much more complex and diverse. To further benchmark the robustness of instance segmentation methods, the scenes are acquired with different lightings, rotations, and backgrounds. We ensure that there are no ambiguities in the labels and that every instance is labeled comprehensively. The annotations are pixel-precise and allow using crops of single instances for artificial data augmentation. The dataset covers several challenges highly relevant to the field, such as a limited amount of training data and a high diversity in the test and validation sets. The evaluation of state-of-the-art object detection and instance segmentation methods on D2S reveals significant room for improvement.
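
Assuming the D2S annotations follow the COCO JSON format, a typical way to access the pixel-precise instance masks is via pycocotools, as in the sketch below; the annotation file path is hypothetical.

```python
# Minimal sketch of reading instance annotations for a dataset like D2S,
# assuming COCO-style JSON annotation files (the file name below is hypothetical).
from pycocotools.coco import COCO

coco = COCO("annotations/d2s_training.json")      # hypothetical path
image_ids = coco.getImgIds()
first = coco.loadImgs(image_ids[0])[0]
annotation_ids = coco.getAnnIds(imgIds=first["id"])
for annotation in coco.loadAnns(annotation_ids):
    category = coco.loadCats(annotation["category_id"])[0]["name"]
    mask = coco.annToMask(annotation)             # pixel-precise binary mask
    print(category, mask.sum(), "labeled pixels")
```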


Archive | 2007

Machine Vision Algorithms and Applications

Carsten Steger; Markus Ulrich; Christian Wiedemann


Archive | 2003

Hierarchical component based object recognition

Markus Ulrich; Carsten Steger


Archive | 2011

Recognition and Pose Determination of 3D Objects in 3D Scenes

Bertram Drost; Markus Ulrich
