Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andreas Jordt is active.

Publication


Featured research published by Andreas Jordt.


Human Factors in Computing Systems | 2013

Flexpad: highly flexible bending interactions for projected handheld displays

Jürgen Steimle; Andreas Jordt; Pattie Maes

Flexpad is an interactive system that combines a depth camera and a projector to transform sheets of plain paper or foam into flexible, highly deformable, and spatially aware handheld displays. We present a novel approach for tracking deformed surfaces from depth images in real time. It captures deformations in high detail, is very robust to occlusions created by the user's hands and fingers, and does not require any kind of markers or visible texture. As a result, the display is considerably more deformable than in previous work on flexible handheld displays, enabling novel applications that leverage the high expressiveness of detailed deformation. We illustrate these unique capabilities through three application examples: curved cross-cuts in volumetric images, deforming virtual paper characters, and slicing through time in videos. Results from two user studies show that our system is capable of detecting complex deformations and that users are able to perform them quickly and precisely.


International Journal of Computer Vision | 2013

Direct Model-Based Tracking of 3D Object Deformations in Depth and Color Video

Andreas Jordt; Reinhard Koch

The tracking of deformable objects using video data is a demanding research topic due to the inherent ambiguity problems, which can only be solved using additional assumptions about the deformation. Image feature points, commonly used to approach the deformation problem, only provide sparse information about the scene at hand. In this paper a tracking approach for deformable objects in color and depth video is introduced that does not rely on feature points or optical flow data but employs all the input image information available to find a suitable deformation for the data at hand. A versatile NURBS-based deformation space is defined for arbitrary complex triangle meshes, decoupling the object surface complexity from the complexity of the deformation. An efficient optimization scheme is introduced that is able to calculate results in real-time (25 Hz). Extensive synthetic and real data tests of the algorithm and its features show the reliability of this approach.
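
A minimal sketch of such a direct, featureless tracking objective, assuming a hypothetical model object with deform, render_depth and render_color methods (these are not part of the paper) and using a generic SciPy optimizer in place of the authors' efficient optimization scheme:

    import numpy as np
    from scipy.optimize import minimize

    def tracking_cost(params, model, depth_img, color_img):
        """Direct per-pixel cost: no feature points or optical flow, only the
        difference between synthesized and observed depth/color images."""
        mesh = model.deform(params)                 # low-dimensional deformation space
        depth_res = model.render_depth(mesh) - depth_img
        color_res = model.render_color(mesh) - color_img
        lambda_c = 0.1                              # modality weight, illustrative only
        return np.nanmean(depth_res ** 2) + lambda_c * np.nanmean(color_res ** 2)

    # Per frame, refine the previous frame's parameters:
    # result = minimize(tracking_cost, prev_params,
    #                   args=(model, depth_img, color_img), method="Powell")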


IEEE Transactions on Automation Science and Engineering | 2014

An Adaptable Robot Vision System Performing Manipulation Actions With Flexible Objects

Leon Bodenhagen; Andreas Rune Fugl; Andreas Jordt; Morten Willatzen; Knud Aulkjær Andersen; Martin M. Olsen; Reinhard Koch; Henrik Gordon Petersen; Norbert Krüger

This paper describes an adaptable system which is able to perform manipulation operations (such as Peg-in-Hole or Laying-Down actions) with flexible objects. As such objects easily change their shape significantly during the execution of an action, traditional strategies, e.g., for solving path-planning problems, are often not applicable. It is therefore required to integrate visual tracking and shape reconstruction with a physical modeling of the materials and their deformations as well as action learning techniques. All these different submodules have been integrated into a demonstration platform, operating in real-time. Simulations have been used to bootstrap the learning of optimal actions, which are subsequently improved through real-world executions. To achieve reproducible results, we demonstrate this for cast silicone test objects of regular shape. Note to Practitioners: The aim of this work was to facilitate the setup of robot-based automation of delicate handling of flexible objects consisting of a uniform material. As examples, we have considered how to optimally maneuver flexible objects through a hole without colliding and how to place flexible objects on a flat surface with minimal introduction of internal stresses in the object. Given the material properties of the object, we have demonstrated in these two applications how the system can be programmed with minimal requirements of human intervention. Rather than being an integrated system, with the drawbacks this entails in terms of lacking flexibility, our system should be viewed as a library of new technologies that have been proven to work in close to industrial conditions. As a rather basic, but necessary part, we provide a technology for determining the shape of the object when passing on, e.g., a conveyor belt prior to being handled. The main technologies applicable for the manipulated objects are: a method for real-time tracking of the flexible objects during manipulation, a method for model-based offline prediction of the static deformation of grasped, flexible objects and, finally, a method for optimizing specific tasks based on both simulated and real-world executions.


International Conference on Robotics and Automation | 2009

Automatic high-precision self-calibration of camera-robot systems

Andreas Jordt; Nils T. Siebel; Gerald Sommer

In this article a new method is presented to obtain a full and precise calibration of camera-robot systems with eye-in-hand cameras. It achieves a simultaneous and numerically stable calibration of intrinsic and extrinsic camera parameters by analysing the image coordinates of a single point marker placed in the environment of the robot. The method works by first determining a rough initial estimate of the camera pose in the tool coordinate frame. This estimate is then used to generate a set of uniformly distributed calibration poses from which the object is visible. The measurements obtained in these poses are then used to obtain the exact parameters with CMA-ES (Covariance Matrix Adaptation Evolution Strategy), a derandomised variant of an evolution strategy optimiser. Minimal demands on the surrounding area and flexible handling of environmental and kinematic limitations make this method applicable to a range of robot setups and camera models. The algorithm runs autonomously without supervision and does not need manual adjustments. Our problem formulation is directly in 3D space, which helps in minimising the resulting calibration errors in the robot's task space. Both simulations and experimental results with a real robot show very good convergence and high repeatability of calibration results without requiring user-supplied initial estimates of the calibration parameters.
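
A hedged sketch of the CMA-ES refinement step, using the third-party Python package cma (pip install cma); the parameter packing and the project() callable are illustrative assumptions, not the authors' exact formulation:

    import numpy as np
    import cma   # third-party CMA-ES package

    def reprojection_error(x, project, robot_poses, image_points):
        """x packs the intrinsic parameters, the camera-in-tool transform and the
        3D marker position; project(x, tool_pose) returns the predicted pixel
        coordinates of the marker for one calibration pose (illustrative helper)."""
        return sum(
            float(np.sum((project(x, T_tool) - uv) ** 2))
            for T_tool, uv in zip(robot_poses, image_points)
        )

    # x0: rough estimate from the bootstrap stage, sigma0: initial CMA-ES step size.
    # xbest, es = cma.fmin2(reprojection_error, x0, sigma0=0.1,
    #                       args=(project, robot_poses, image_points))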


British Machine Vision Conference | 2011

Fast Tracking of Deformable Objects in Depth and Colour Video

Andreas Jordt; Reinhard Koch

One challenge in computer vision is the joint reconstruction of deforming objects from colour and depth videos. So far, a lot of research has focused on deformation reconstruction based on colour images only, but as range cameras like the recently released Kinect become more and more common, the incorporation of depth information becomes feasible. In this article, a new method is introduced to track object deformation in depth and colour image data. The tracking is done by translating, rotating, and deforming a prototype of an object such that it fits the depth and colour data best. The prototype can either be cut out from the first depth/colour frame of the input sequence, or an already known textured geometry can be used. A NURBS-based [2] deformation function allows the geometrical object complexity to be decoupled from the complexity of the deformation itself, providing a relatively low-dimensional space to describe arbitrary 'realistic' deformations. This is done by first approximating the object surface using a standard NURBS function N and then registering every object vertex to the surface, as depicted in figure 1.
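
The decoupling can be illustrated with a small sketch: once every mesh vertex is registered to the approximating surface, its deformed position is a fixed linear combination of the control points, so only the few control points need to move during tracking. The basis matrix B below is assumed to come from such a one-off registration step, with the rational NURBS weights folded into it for simplicity:

    import numpy as np

    def deform(B, control_points):
        """Deformed vertex positions from low-dimensional control-point positions.
        B: (V, C) precomputed basis/weight matrix, control_points: (C, 3)."""
        return B @ control_points        # (V, 3); cost independent of mesh complexity

    # Example: a dense mesh driven by a 4x4 control grid (only 48 parameters).
    # B = register_vertices_to_surface(mesh, surface)   # hypothetical one-off step
    # new_vertices = deform(B, control_points + offsets)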


Joint DAGM (German Association for Pattern Recognition) and OAGM Symposium | 2012

Simultaneous Estimation of Material Properties and Pose for Deformable Objects from Depth and Color Images

Andreas Rune Fugl; Andreas Jordt; Henrik Gordon Petersen; Morten Willatzen; Reinhard Koch

In this paper we consider the problem of estimating 6D pose, material properties and deformation of an object grasped by a robot gripper. To estimate the parameters we minimize an error function incorporating visual and physical correctness. Through simulated and real-world experiments we demonstrate that we are able to find realistic 6D poses and elasticity parameters like Young’s modulus. This makes it possible to perform subsequent manipulation tasks, where accurate modelling of the elastic behaviour is important.
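
An illustrative sketch (not the authors' implementation) of such a joint minimization, assuming a hypothetical model object that simulates the deformation for a given pose and Young's modulus and scores visual and physical correctness:

    import numpy as np
    from scipy.optimize import minimize

    def joint_cost(x, model, observations):
        """x = (6D pose, Young's modulus); the combined error rewards agreement
        with the depth/color observations and physical plausibility."""
        pose, youngs_modulus = x[:6], x[6]
        simulated = model.simulate(pose, youngs_modulus)   # hypothetical physics step
        return (model.visual_error(simulated, observations)
                + model.physical_error(simulated))

    # res = minimize(joint_cost, x0, args=(model, observations), method="Nelder-Mead")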


Human Factors in Computing Systems | 2013

Flexpad: a highly flexible handheld display

Jürgen Steimle; Andreas Jordt; Pattie Maes

This video demonstrates Flexpad, a highly flexible display interface. Flexpad introduces a novel way of interacting with flexible displays by using detailed deformations. Using a Kinect camera and a projector, Flexpad transforms virtually any sheet of paper or foam into a flexible, highly deformable and spatially aware handheld display. It uses a novel approach for tracking deformed surfaces from depth images very robustly, in high detail and in real time. As a result, the display is considerably more deformable than in previous work on flexible handheld displays, enabling novel applications that leverage the high expressiveness of detailed deformation. We illustrate these unique capabilities through three application examples: curved cross-cuts in volumetric images, deforming virtual paper characters, and slicing through time in videos.


Time-of-Flight and Depth Imaging | 2013

Reconstruction of Deformation from Depth and Color Video with Explicit Noise Models

Andreas Jordt; Reinhard Koch

Depth sensors like ToF cameras and structured light devices provide valuable scene information, but do not provide a stable base for optical flow or feature movement calculation, because the lack of texture information makes depth image registration very challenging. Approaches associating depth values with optical flow or feature movement from color images try to circumvent this problem, but suffer from the fact that color features are often generated at edges and depth discontinuities, areas in which depth sensors inherently deliver unstable data. Using deformation tracking as an application, this article discusses the benefits of Analysis by Synthesis (AbS) in approaching the tracking problem and how explicit sensor noise models can be incorporated into such an approach.
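
A minimal sketch of how an explicit noise model can enter an Analysis-by-Synthesis residual: per-pixel errors are divided by the expected sensor variance, so unreliable pixels (e.g. near depth discontinuities) contribute less. The synthesize_depth and noise_variance callables are illustrative placeholders, not the paper's actual formulation:

    import numpy as np

    def abs_cost(params, synthesize_depth, noise_variance, observed_depth):
        """Noise-weighted Analysis-by-Synthesis cost for one depth frame."""
        synth = synthesize_depth(params)          # render depth from the deformed model
        var = noise_variance(observed_depth)      # per-pixel expected sensor variance
        return float(np.nanmean((synth - observed_depth) ** 2 / var))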


International Conference on Intelligent Robotics and Applications | 2011

An outline for an intelligent system performing peg-in-hole actions with flexible objects

Andreas Jordt; Andreas Rune Fugl; Leon Bodenhagen; Morten Willatzen; Reinhard Koch; Henrik Gordon Petersen; Knud Aulkjær Andersen; Martin M. Olsen; Norbert Krüger

We describe the outline of an adaptable system which is able to perform grasping and peg-in-hole actions with flexible objects. The system makes use of visual tracking and shape reconstruction, physical modeling of flexible material and learning based on a kernel density approach. We show results for the different sub-modules in simulation as well as real world data.


DAGM Conference on Pattern Recognition | 2010

High-resolution object deformation reconstruction with active range camera

Andreas Jordt; Ingo Schiller; Johannes Bruenger; Reinhard Koch

This contribution discusses the 3D reconstruction of deformable freeform surfaces with high spatial and temporal resolution. These are conflicting requirements, since high-resolution surface scanners typically cannot achieve high temporal resolution, while high-speed range cameras like Time-of-Flight (ToF) cameras capture depth at 25 fps but have a limited spatial resolution. We propose to combine a high-resolution surface scan with a ToF camera and a color camera to meet both requirements. The 3D surface deformation is modeled by a NURBS surface that approximates the object surface; the 3D object motion and local 3D deformation are estimated from the ToF and color camera data. A small set of NURBS control points can faithfully model the motion and deformation and is estimated from the ToF and color data with high accuracy. The contribution focuses on the estimation of the 3D deformation NURBS from the ToF and color data.
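
Under the simplifying assumption that the basis weights relating the ToF sample locations to the NURBS control points are fixed (e.g. from registering the high-resolution scan to the surface), estimating the control points from the measured 3D points reduces to a linear least-squares problem. The sketch below is illustrative and omits the color data:

    import numpy as np

    def fit_control_points(B, tof_points):
        """B: (N, C) basis weights at the N ToF sample locations,
        tof_points: (N, 3) measured 3D points. Returns (C, 3) control points."""
        ctrl, *_ = np.linalg.lstsq(B, tof_points, rcond=None)
        return ctrl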

Collaboration


Dive into Andreas Jordt's collaboration.

Top Co-Authors

Andreas Rune Fugl
University of Southern Denmark

Henrik Gordon Petersen
University of Southern Denmark

Morten Willatzen
Technical University of Denmark

Martin M. Olsen
University of Southern Denmark