
Publications


Featured research published by Babette Dellen.


The International Journal of Robotics Research | 2011

Learning the semantics of object-action relations by observation

Eren Erdal Aksoy; Alexey Abramov; Johannes Dörr; KeJun Ning; Babette Dellen; Florentin Wörgötter

Recognizing manipulations performed by a human, and transferring and executing them on a robot, is a difficult problem. We address this in the current study by introducing a novel representation of the relations between objects at decisive time points during a manipulation. Thereby, we encode the essential changes in a visual scene in a condensed way such that a robot can recognize and learn a manipulation without prior object knowledge. To achieve this we continuously track image segments in the video and construct a dynamic graph sequence. Topological transitions of those graphs occur whenever a spatial relation between some segments changes in a discontinuous way, and these moments are stored in a transition matrix called the semantic event chain (SEC). We demonstrate that these time points are highly descriptive for distinguishing between different manipulations. Employing simple sub-string search algorithms, SECs can be compared and type-similar manipulations can be recognized with high confidence. As the approach is generic, statistical learning can be used to find the archetypal SEC of a given manipulation class. The performance of the algorithm is demonstrated on a set of real videos showing hands manipulating various objects and performing different actions. In experiments with a robotic arm, we show that the SEC can be learned by observing human manipulations, transferred to a new scenario, and then reproduced by the machine.
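The substring-based SEC comparison can be illustrated with a toy sketch; the relation alphabet and SEC rows below are invented for illustration and are not the paper's actual encoding:

```python
from difflib import SequenceMatcher

# Toy semantic event chains (SECs): each row encodes one segment-pair
# relation at the decisive time points (N = no contact, T = touching,
# A = absent). Alphabet and rows are invented for illustration.
sec_push_a = ["NNTT", "NTTN"]
sec_push_b = ["NNTT", "NTTT"]
sec_pick   = ["TANN", "AANT"]

def sec_similarity(a, b):
    """Mean best row-to-row substring similarity between two SECs."""
    scores = []
    for row_a in a:
        scores.append(max(SequenceMatcher(None, row_a, row_b).ratio()
                          for row_b in b))
    return sum(scores) / len(scores)

# Type-similar manipulations score higher than dissimilar ones.
print(sec_similarity(sec_push_a, sec_push_b))
print(sec_similarity(sec_push_a, sec_pick))
```

Any substring-matching metric would do here; `SequenceMatcher` simply stands in for the paper's sub-string search.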


International Conference on Robotics and Automation | 2011

3D modelling of leaves from color and ToF data for robotized plant measuring

Guillem Alenyà; Babette Dellen; Carme Torras

Supervision of long-lasting extensive botanic experiments is a promising robotic application that some recent technological advances have made feasible. Plant modelling for this application has strong demands, particularly concerning 3D information gathering and speed. This paper shows that Time-of-Flight (ToF) cameras achieve a good compromise between both demands, providing a suitable complement to color vision. A new method is proposed to segment plant images into their composite surface patches by combining hierarchical color segmentation with quadratic surface fitting using ToF depth data. Experimentation shows that the interpolated depth maps derived from the obtained surfaces fit the original scenes well. Moreover, candidate leaves to be approached by a measuring instrument are ranked, and then robot-mounted cameras move closer to them to validate their suitability for sampling. Some ambiguities arising from leaf overlap or occlusion are cleared up in this way. The work is a proof-of-concept that dense color data combined with sparse depth as provided by a ToF camera yields a good enough 3D approximation for automated plant measuring at the high throughput imposed by the application.
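The quadratic surface fitting step can be sketched as a linear least-squares problem; the depth samples below are synthetic stand-ins for ToF data, not measurements from the paper:

```python
import numpy as np

# Least-squares fit of a quadratic surface z = a*x^2 + b*y^2 + c*x*y
# + d*x + e*y + f to sparse, noisy depth samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
z_true = 0.3 * x**2 - 0.2 * x * y + 0.1 * y + 0.5   # curved "leaf" patch
z = z_true + rng.normal(0.0, 0.01, x.size)          # noisy sparse depth

# Design matrix: one column per surface coefficient.
A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

z_fit = A @ coeffs                        # interpolated depth at the samples
rmse = np.sqrt(np.mean((z_fit - z_true) ** 2))
print(f"surface-fit RMSE: {rmse:.4f}")
```

Evaluating the fitted coefficients on a dense pixel grid yields the interpolated depth maps mentioned in the abstract.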


International Conference on Robotics and Automation | 2010

Categorizing object-action relations from semantic scene graphs

Eren Erdal Aksoy; Alexey Abramov; Florentin Wörgötter; Babette Dellen

In this work we introduce a novel approach for detecting spatiotemporal object-action relations, leading to both action recognition and object categorization. Semantic scene graphs are extracted from image sequences and used to find the characteristic main graphs of the action sequence via an exact graph-matching technique, thus providing an event table of the action scene, which allows extracting object-action relations. The method is applied to several artificial and real action scenes containing limited context. The central novelty of this approach is that it is model free and requires no a priori representations of either objects or actions. Essentially, actions are recognized without requiring prior object knowledge, and objects are categorized solely based on their exhibited role within an action sequence. Thus, this approach is grounded in the affordance principle, which has recently attracted much attention in robotics and provides a way forward for trial-and-error learning of object-action relations through repeated experimentation. It may therefore be useful for recognition and categorization tasks, for example in imitation learning in developmental and cognitive robotics.


Workshop on Applications of Computer Vision | 2012

Depth-supported real-time video segmentation with the Kinect

Alexey Abramov; Karl Pauwels; Jeremie Papon; Florentin Wörgötter; Babette Dellen

We present a real-time technique for the spatiotemporal segmentation of color/depth movies. Images are segmented using a parallel Metropolis algorithm implemented on a GPU utilizing both color and depth information, acquired with the Microsoft Kinect. Segments represent the equilibrium states of a Potts model, where tracking of segments is achieved by warping obtained segment labels to the next frame using real-time optical flow, which reduces the number of iterations required for the Metropolis method to reach the new equilibrium state. By including depth information into the framework, true object boundaries can be found more easily, also improving the temporal coherence of the method. The algorithm has been tested on videos of medium resolution showing human manipulations of objects. The framework provides an inexpensive front end for visual preprocessing of videos in industrial settings and robot labs which can potentially be used in various applications.
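The label-warping idea, initialising the next frame's segmentation by shifting labels along the optical flow, can be sketched on a toy grid; the flow field and labels below are invented:

```python
import numpy as np

# Toy forward warping of segment labels along the optical flow: the
# warped labels start the next frame's Metropolis relaxation close to
# its equilibrium, so fewer iterations are needed.
labels = np.zeros((4, 6), dtype=int)
labels[:, 3:] = 1                        # "object" occupies the right half
flow = np.tile([0, 1], (4, 6, 1))        # every pixel moves 1 px right (dy, dx)

h, w = labels.shape
warped = np.zeros_like(labels)
for yy in range(h):
    for xx in range(w):
        ny, nx = yy + flow[yy, xx, 0], xx + flow[yy, xx, 1]
        if 0 <= ny < h and 0 <= nx < w:       # drop pixels leaving the frame
            warped[ny, nx] = labels[yy, xx]
print(warped[0])
```

A real implementation would handle disocclusions and sub-pixel flow; this sketch only shows the initialisation principle.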


IEEE Robotics & Automation Magazine | 2013

Robotized Plant Probing: Leaf Segmentation Utilizing Time-of-Flight Data

Guillem Alenyà; Babette Dellen; Sergi Foix; Carme Torras

Supervision of long-lasting extensive botanic experiments is a promising robotic application that some recent technological advances have made feasible. Plant modeling for this application has strong demands, particularly concerning three-dimensional (3-D) information gathering and speed.


Computers and Electronics in Agriculture | 2015

Modeling leaf growth of rosette plants using infrared stereo image sequences

Eren Erdal Aksoy; Alexey Abramov; Florentin Wörgötter; Hanno Scharr; Andreas Fischbach; Babette Dellen

Highlights: We introduce a novel method for finding and tracking multiple plant leaves; relevant plant parameters (e.g. leaf growth rates) can be measured automatically; the procedure has three stages: preprocessing, leaf segmentation, and tracking; the method was tested on infrared tobacco-plant image sequences; the framework served as a robotic perception unit in the EU project Garnics.

In this paper, we present a novel multi-level procedure for finding and tracking leaves of a rosette plant, in our case tobacco plants up to 3 weeks old, during early growth from infrared-image sequences. This allows measuring important plant parameters, e.g. leaf growth rates, in an automatic and non-invasive manner. The procedure consists of three main stages: preprocessing, leaf segmentation, and leaf tracking. Leaf-shape models are applied to improve leaf segmentation, and further used for measuring leaf sizes and handling occlusions. Leaves typically grow radially away from the stem, a property that is exploited in our method, reducing the dimensionality of the tracking task. We successfully tested the method on infrared image sequences showing the growth of tobacco-plant seedlings up to an age of about 30 days, which allows measuring relevant plant growth parameters such as leaf growth rate. By robustly fitting a suitably modified autocatalytic growth model to all growth curves from plants under the same treatment, average plant growth models could be derived. Future applications of the method include plant-growth monitoring for optimizing plant production in greenhouses or plant phenotyping for plant research.
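Fitting an autocatalytic (logistic) growth model to a leaf-area curve can be sketched as follows; all parameter values and the grid-search fit are illustrative, not the paper's robust fitting procedure:

```python
import numpy as np

# Autocatalytic (logistic) leaf-growth model: dA/dt = r * A * (1 - A/K).
def logistic(t, a0, r, k):
    return k / (1.0 + (k / a0 - 1.0) * np.exp(-r * t))

t = np.arange(0, 30)                               # days
area = logistic(t, a0=5.0, r=0.4, k=400.0)         # synthetic leaf area
rng = np.random.default_rng(1)
area_noisy = area * (1.0 + 0.02 * rng.normal(size=t.size))

# Coarse grid search for the growth rate r (a0 and K assumed known).
rates = np.linspace(0.1, 0.8, 141)
errors = [np.sum((logistic(t, 5.0, r, 400.0) - area_noisy) ** 2)
          for r in rates]
r_hat = rates[int(np.argmin(errors))]
print(f"estimated growth rate: {r_hat:.3f} per day")
```

Averaging such fitted rates over plants under the same treatment gives a per-treatment growth model, as the abstract describes.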


Facing the Multicore-Challenge | 2010

Real-time image segmentation on a GPU

Alexey Abramov; Tomas Kulvicius; Florentin Wörgötter; Babette Dellen

Efficient segmentation of color images is important for many applications in computer vision. Non-parametric solutions are required in situations where little or no prior knowledge about the data is available. In this paper, we present a novel parallel image segmentation algorithm which segments images in real-time in a non-parametric way. The algorithm finds the equilibrium states of a Potts model in the superparamagnetic phase of the system. Our method maps perfectly onto the Graphics Processing Unit (GPU) architecture and has been implemented using the NVIDIA Compute Unified Device Architecture (CUDA) framework. For images of 256 × 320 pixels we obtained a frame rate of 30 Hz, which demonstrates the applicability of the algorithm to video-processing tasks in real-time.


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Real-Time Segmentation of Stereo Videos on a Portable System With a Mobile GPU

Alexey Abramov; Karl Pauwels; Jeremie Papon; Florentin Wörgötter; Babette Dellen

In mobile robotic applications, visual information needs to be processed fast despite resource limitations of the mobile system. Here, a novel real-time framework for model-free spatiotemporal segmentation of stereo videos is presented. It combines real-time optical flow and stereo with image segmentation and runs on a portable system with an integrated mobile graphics processing unit. The system performs online, automatic, and dense segmentation of stereo videos and serves as a visual front end for preprocessing in mobile robots, providing a condensed representation of the scene that can potentially be utilized in various applications, e.g., object manipulation, manipulation recognition, visual servoing. The method was tested on real-world sequences with arbitrary motions, including videos acquired with a moving camera.


Workshop on Applications of Computer Vision | 2011

Segmenting color images into surface patches by exploiting sparse depth data

Babette Dellen; Guillem Alenyà; Sergi Foix; Carme Torras

We present a new method for segmenting color images into their composite surfaces by combining color segmentation with model-based fitting utilizing sparse depth data, acquired using time-of-flight (Swissranger, PMD CamCube) and stereo techniques. The main target of our work is the segmentation of plant structures, i.e., leaves, from color-depth images, and the extraction of color and 3D shape information for automating manipulation tasks. Since segmentation is performed in the dense color space, even sparse, incomplete, or noisy depth information can be used. This kind of data often represents a major challenge for methods operating in the 3D data space directly. To achieve our goal, we construct a three-stage segmentation hierarchy by segmenting the color image with different resolutions, assuming that “true” surface boundaries must appear at some point along the segmentation hierarchy. 3D surfaces are then fitted to the color-segment areas using depth data. Those segments which minimize the fitting error are selected and used to construct a new segmentation. Then, an additional region merging and a growing stage are applied to avoid over-segmentation and label previously unclustered points. Experimental results demonstrate that the method is successful in segmenting a variety of domestic objects and plants into quadratic surfaces. At the end of the procedure, the sparse depth data is completed using the extracted surface models, resulting in dense depth maps. For stereo, the resulting disparity maps are compared with ground truth and the average error is computed.
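The select-by-fitting-error idea can be sketched by comparing a single surface fit against a split fit on synthetic depth data; the surfaces and the split are made up, and planes stand in for general quadratic surfaces for brevity:

```python
import numpy as np

# A coarse segment spanning two different planar surfaces fits worse
# than its two finer child segments, so the split is selected.
rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 300)
y = rng.uniform(0, 1, 300)
z = np.where(x < 0.5, 2.0 + 0.1 * y, 1.0 - 0.3 * y)  # two tilted planes

def plane_rss(x, y, z):
    """Residual sum of squares of a least-squares plane fit."""
    A = np.column_stack([x, y, np.ones_like(x)])
    _, res, *_ = np.linalg.lstsq(A, z, rcond=None)
    return float(res[0]) if res.size else 0.0

coarse_err = plane_rss(x, y, z)                  # one segment for everything
m = x < 0.5
fine_err = plane_rss(x[m], y[m], z[m]) + plane_rss(x[~m], y[~m], z[~m])
print(coarse_err, fine_err)
```

The same residual comparison, run across all levels of the color-segmentation hierarchy, picks out the segments that correspond to single surfaces.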


International Conference on Robotics and Automation | 2017

Combining Semantic and Geometric Features for Object Class Segmentation of Indoor Scenes

Farzad Husain; Hannes Schulz; Babette Dellen; Carme Torras; Sven Behnke

Scene understanding is a necessary prerequisite for robots acting autonomously in complex environments. Low-cost RGB-D cameras such as Microsoft Kinect enabled new methods for analyzing indoor scenes and are now ubiquitously used in indoor robotics. We investigate strategies for efficient pixelwise object class labeling of indoor scenes that combine both pretrained semantic features transferred from a large color image dataset and geometric features, computed relative to the room structures, including a novel distance-from-wall feature, which encodes the proximity of scene points to a detected major wall of the room. We evaluate our approach on the popular NYU v2 dataset. Several deep learning models are tested, which are designed to exploit different characteristics of the data. This includes feature learning with two different pooling sizes. Our results indicate that combining semantic and geometric features yields significantly improved results for the task of object class segmentation.
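A minimal sketch of a distance-from-wall feature channel, assuming the wall plane has already been detected; the plane and the scene points below are made up:

```python
import numpy as np

# Signed distance of 3-D scene points to a detected wall plane
# n·p + d = 0; wall detection itself is not shown here.
n = np.array([1.0, 0.0, 0.0])            # unit wall normal
d = 0.0                                  # wall passes through x = 0
points = np.array([[0.1, 1.0, 2.0],      # close to the wall
                   [1.5, 0.3, 1.0],
                   [3.2, 2.0, 0.5]])     # deep inside the room
dist_from_wall = points @ n + d          # one feature value per point
print(dist_from_wall)
```

Stacked as an extra input channel, such geometric features complement the pretrained semantic features when labeling each pixel.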

Collaboration


Babette Dellen's top co-authors and their affiliations:

Carme Torras (Spanish National Research Council)
Alexey Abramov (University of Göttingen)
Farzad Husain (Spanish National Research Council)
Eren Erdal Aksoy (Karlsruhe Institute of Technology)
Guillem Alenyà (Spanish National Research Council)
Ralf Wessel (Washington University in St. Louis)
Jeremie Papon (University of Göttingen)
Sergi Foix (Spanish National Research Council)