Publication


Featured research published by Ferenc Balint-Benczedi.


International Conference on Robotics and Automation | 2015

RoboSherlock: Unstructured information processing for robot perception

Michael Beetz; Ferenc Balint-Benczedi; Nico Blodow; Daniel Nyga; Thiemo Wiedemeyer; Zoltan-Csaba Marton

We present RoboSherlock, an open-source software framework for implementing perception systems for robots performing human-scale everyday manipulation tasks. In RoboSherlock, perception and interpretation of realistic scenes are formulated as an unstructured information management (UIM) problem. The application of the UIM principle supports the implementation of perception systems that can answer task-relevant queries about objects in a scene, boost object recognition performance by combining the strengths of multiple perception algorithms, support knowledge-enabled reasoning about objects, and enable automatic and knowledge-driven generation of processing pipelines. We demonstrate the potential of the proposed framework through three feasibility studies of systems for real-world scene perception that have been built on top of RoboSherlock.
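
The UIM formulation is essentially a pipeline of independent expert annotators writing typed results into a shared analysis structure, against which task-level queries are answered. The minimal Python sketch below illustrates that pattern only; the class and function names are invented and do not correspond to RoboSherlock's actual (UIMA-based) API.

```python
# Minimal sketch (not the RoboSherlock API): independent "annotators" write
# typed annotations into a shared scene structure, and queries are answered
# against the accumulated annotations.
from dataclasses import dataclass, field

@dataclass
class ObjectHypothesis:
    region_id: int
    annotations: dict = field(default_factory=dict)  # e.g. {"color": "red"}

@dataclass
class Scene:
    hypotheses: list

def color_annotator(scene):
    # placeholder for a real color-perception expert
    for h in scene.hypotheses:
        h.annotations.setdefault("color", "red" if h.region_id % 2 else "blue")

def shape_annotator(scene):
    # placeholder for a real shape-perception expert
    for h in scene.hypotheses:
        h.annotations.setdefault("shape", "cylinder")

def answer_query(scene, **constraints):
    """Return hypotheses whose annotations match all query constraints."""
    return [h for h in scene.hypotheses
            if all(h.annotations.get(k) == v for k, v in constraints.items())]

if __name__ == "__main__":
    scene = Scene([ObjectHypothesis(i) for i in range(4)])
    for annotator in (color_annotator, shape_annotator):  # the "pipeline"
        annotator(scene)
    print(answer_query(scene, color="red", shape="cylinder"))
```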


International Conference on Robotics and Automation | 2014

PR2 Looking at Things: Ensemble Learning for Unstructured Information Processing with Markov Logic Networks

Daniel Nyga; Ferenc Balint-Benczedi; Michael Beetz

We investigate the perception and reasoning task of answering queries about realistic scenes with objects of daily use perceived by a robot. A key problem implied by the task is the variety of perceivable object properties, such as shape, texture, color, size, text pieces, and logos, which goes beyond the capabilities of individual state-of-the-art perception methods. A promising alternative is to employ combinations of more specialized perception methods. In this paper we propose a novel combination method, which structures perception as a two-step process, and apply this method in our object perception system. In the first step, specialized methods annotate detected object hypotheses with symbolic information pieces. In the second step, a given query Q is answered by inferring the conditional probability P(Q | E), where E are the symbolic information pieces considered as evidence. In this setting, Q and E are part of a probabilistic model of scenes, objects, and their annotations over which the perception method has beforehand learned a joint probability distribution. Our proposed method has substantial advantages over alternative methods in terms of the generality of the queries that can be answered, the generation of information that can actively guide perception, the ease of extension, the possibility of including additional kinds of evidence, and its potential for the realization of self-improving and self-specializing perception systems. We show for object categorization, which is a subclass of these probabilistic inferences, that impressive categorization performance can be achieved by combining the employed expert perception methods in a synergistic manner.
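
To make the two-step idea concrete, the toy Python sketch below replaces the paper's Markov logic network with a simple count-based joint table over invented training annotations; it only illustrates how a categorization query Q is answered as P(Q | E) once evidence E has been produced by specialized annotators.

```python
# Toy sketch of the two-step idea (not the paper's MLN implementation):
# step 1 produces symbolic annotations E for an object hypothesis, step 2
# answers a query Q by conditional inference P(Q | E) in a model learned
# beforehand. Here the "model" is just a count table over invented examples.
from collections import Counter

training = [  # hypothetical annotated examples: (category, color, shape)
    ("mug",   "red",   "cylinder"),
    ("mug",   "blue",  "cylinder"),
    ("plate", "white", "flat"),
    ("plate", "blue",  "flat"),
    ("mug",   "red",   "cylinder"),
]
joint = Counter(training)

def p_category_given_evidence(color, shape):
    """P(category | color, shape) from the joint count table."""
    matches = {cat: n for (cat, c, s), n in joint.items()
               if c == color and s == shape}
    total = sum(matches.values())
    return {cat: n / total for cat, n in matches.items()} if total else {}

# Step 1 (stand-in): perception experts annotated the hypothesis as red + cylinder.
evidence = {"color": "red", "shape": "cylinder"}
# Step 2: conditional inference answers the categorization query.
print(p_category_given_evidence(**evidence))   # {'mug': 1.0}
```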


International Conference on Robotics and Automation | 2013

Tracking-based interactive segmentation of textureless objects

Karol Hausman; Ferenc Balint-Benczedi; Dejan Pangercic; Zoltan-Csaba Marton; Ryohei Ueda; Kei Okada; Michael Beetz

This paper describes a textureless object segmentation approach for autonomous service robots acting in human living environments. The proposed system allows a robot to effectively segment textureless objects in cluttered scenes by leveraging its manipulation capabilities. In our pipeline, the cluttered scene is first statically segmented using a state-of-the-art classification algorithm, and then interactive segmentation is deployed in order to resolve this possibly ambiguous static segmentation. In the second step, sparse RGBD (RGB + Depth) features, estimated on the RGBD point cloud from the Kinect sensor, are extracted and tracked while motion is induced into the scene. Using the resulting feature poses, the features are then assigned to their corresponding objects by means of a graph-based clustering algorithm. In the final step, we reconstruct dense models of the objects from the previously clustered sparse RGBD features. We evaluated the approach on a set of scenes consisting of various textureless flat (e.g. box-like) and round (e.g. cylinder-like) objects and combinations thereof.
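
The graph-based assignment step can be pictured as follows: features whose pairwise distances stay nearly constant while motion is induced are assumed to move rigidly together, so they are connected in a graph and each connected component becomes one object hypothesis. The sketch below is a simplified illustration of that idea on invented data, not the paper's actual clustering algorithm.

```python
# Simplified sketch: link tracked feature points that kept a (near-)constant
# pairwise distance during the induced motion, then take connected components
# of the resulting graph as object hypotheses.
import math

def cluster_tracks(tracks, tol=0.01):
    """tracks: list of feature trajectories, each a list of (x, y, z) points."""
    n = len(tracks)
    edges = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d_first = math.dist(tracks[i][0], tracks[j][0])
            d_last = math.dist(tracks[i][-1], tracks[j][-1])
            if abs(d_first - d_last) < tol:      # moved rigidly together
                edges[i].append(j)
                edges[j].append(i)
    # connected components = object hypotheses
    labels, current = [-1] * n, 0
    for s in range(n):
        if labels[s] != -1:
            continue
        stack = [s]
        while stack:
            v = stack.pop()
            if labels[v] == -1:
                labels[v] = current
                stack.extend(edges[v])
        current += 1
    return labels

# two features on a pushed box, one feature on the static table
tracks = [[(0.0, 0, 0), (0.1, 0, 0)],
          [(0.2, 0, 0), (0.3, 0, 0)],
          [(1.0, 1, 0), (1.0, 1, 0)]]
print(cluster_tracks(tracks))   # [0, 0, 1]
```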


Intelligent Robots and Systems | 2013

Decomposing CAD models of objects of daily use and reasoning about their functional parts

Moritz Tenorth; Stefan Profanter; Ferenc Balint-Benczedi; Michael Beetz

Today's robots still lack comprehensive knowledge bases about objects and their properties. Yet a lot of knowledge is required when performing manipulation tasks, in order to identify abstract concepts like a “handle” or the “blade of a spatula” and to ground them into concrete coordinate frames that can be used to parameterize the robot's actions. In this paper, we present a system that enables robots to use CAD models of objects as a knowledge source and to perform logical inference about object components that have automatically been identified in these models. The system includes several algorithms for mesh segmentation and geometric primitive fitting, which are integrated into the robot's knowledge base as procedural attachments to the semantic representation. Bottom-up segmentation methods are complemented by top-down, knowledge-based analysis of the identified components. The evaluation on a diverse set of object models downloaded from the Internet shows that the algorithms are able to reliably detect several kinds of object parts.
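
A hedged sketch of the top-down labeling step: once bottom-up segmentation has fitted geometric primitives to mesh parts, knowledge-level rules can map primitives to functional part labels. The rules, thresholds, and names below are invented for illustration and are not taken from the paper's system.

```python
# Illustrative rule-based labeling of fitted primitives (invented rules):
# a thin, elongated cylinder is labeled "handle", a large planar part "blade".
from dataclasses import dataclass

@dataclass
class Primitive:
    kind: str        # "cylinder", "plane", "sphere", ...
    length: float    # metres
    radius: float    # metres (0.0 for planes)

def label_part(p: Primitive) -> str:
    if p.kind == "cylinder" and p.radius < 0.02 and p.length > 0.08:
        return "handle"
    if p.kind == "plane" and p.length > 0.05:
        return "blade"
    return "body"

parts = [Primitive("cylinder", 0.12, 0.012), Primitive("plane", 0.09, 0.0)]
print([label_part(p) for p in parts])   # ['handle', 'blade']
```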


Pattern Recognition Letters | 2013

Ensembles of strong learners for multi-cue classification

Zoltan-Csaba Marton; Florian Seidel; Ferenc Balint-Benczedi; Michael Beetz

Real-world heterogeneous scenes contain objects with a large variety of forms, surfaces, colors, and textures, so multi-modal approaches are needed to deal with their challenges. A promising way of combining various sources of information is the use of ensemble methods, which allow on-the-fly integration of classification modules, each specific to a single sensor modality, into a classification process. These modular and extensible approaches have the advantage that they do not require a single method to cope with every eventuality, but instead combine existing specialized methods to overcome their individual weaknesses. In addition, the rapid growth of the perception field means that comparing, evaluating, sharing, and combining the available approaches becomes increasingly relevant. In this article we describe a novel training strategy for ensembles of strong learners that not only outperform the best member but also the best classifier trained on the concatenation of all features. The method was evaluated on a large RGBD dataset containing Kinect scans of 300 objects, and special use cases are presented that highlight how ensemble learning can be used to improve classification results.
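
The ensemble idea can be illustrated with a minimal sketch in which each sensor modality contributes a class-probability estimate and the ensemble combines them with optional weights. The per-modality "experts" below are trivial stand-ins, not the strong learners trained in the article.

```python
# Minimal sketch of modality-wise ensemble combination (stand-in experts).
def color_expert(sample):
    # pretend RGB-based classifier
    return {"mug": 0.6, "plate": 0.4} if sample["hue"] < 0.5 else {"mug": 0.2, "plate": 0.8}

def geometry_expert(sample):
    # pretend depth/shape-based classifier
    return {"mug": 0.7, "plate": 0.3} if sample["curvature"] > 0.1 else {"mug": 0.1, "plate": 0.9}

def ensemble(sample, experts, weights=None):
    weights = weights or [1.0] * len(experts)
    combined = {}
    for w, expert in zip(weights, experts):
        for cls, p in expert(sample).items():
            combined[cls] = combined.get(cls, 0.0) + w * p
    total = sum(combined.values())
    return {cls: p / total for cls, p in combined.items()}

sample = {"hue": 0.3, "curvature": 0.2}
print(ensemble(sample, [color_expert, geometry_expert]))  # {'mug': 0.65, 'plate': 0.35}
```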


Intelligent Robots and Systems | 2015

Robotic agents capable of natural and safe physical interaction with human co-workers

Michael Beetz; Georg Bartels; Alin Albu-Schäffer; Ferenc Balint-Benczedi; Rico Belder; Daniel Beßler; Sami Haddadin; Alexis Maldonado; Nico Mansfeld; Thiemo Wiedemeyer; Roman Weitschat; Jan-Hendrik Worch

Many future application scenarios of robotics envision robotic agents in close physical interaction with humans: on the factory floor, robotic agents shall support their human co-workers with the dull and health-threatening parts of their jobs; in their homes, robotic agents shall enable people to stay independent even if they have disabilities that require physical help in their daily life, a pressing need for our aging societies. A key requirement for such robotic agents is that they are safety-aware, that is, that they know when actions may hurt or threaten humans and actively refrain from performing them. Safe robot control systems are a current research focus in control theory. The control system designs, however, are rather conservative: programmers build “software fences” around people, effectively preventing physical interaction. To physically interact in a competent manner, robotic agents have to reason about the task context, the human, and her intentions. In this paper, we propose to extend cognition-enabled robot control by introducing humans, physical interaction events, and safe movements as first-class objects into the plan language. We show the power of the safety-aware control approach in a real-world scenario with a leading-edge autonomous manipulation platform. Finally, we share our experimental recordings through an online knowledge processing system and invite the reader to explore the data with queries based on the concepts discussed in this paper.
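
As a purely illustrative sketch of what treating humans and safe movements as first-class plan objects might look like (all names and thresholds below are invented, not the paper's plan language), a motion step can carry its own safety constraint that the plan executive consults at runtime instead of fencing the human out of the workspace:

```python
# Illustrative sketch only: a plan step with an attached safety constraint.
from dataclasses import dataclass

@dataclass
class Human:
    distance_m: float                    # current distance to the robot

@dataclass
class SafeMotion:
    goal: str
    max_speed_near_human: float = 0.1    # m/s, hypothetical limit

    def allowed_speed(self, human: Human, nominal_speed: float) -> float:
        # slow down when a human is close rather than forbidding interaction
        return self.max_speed_near_human if human.distance_m < 0.5 else nominal_speed

step = SafeMotion(goal="hand over tool")
print(step.allowed_speed(Human(distance_m=0.3), nominal_speed=0.5))  # 0.1
```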


International Conference Spatial Cognition | 2012

Object categorization in clutter using additive features and hashing of part-graph descriptors

Zoltan-Csaba Marton; Ferenc Balint-Benczedi; Florian Seidel; Lucian Cosmin Goron; Michael Beetz

Detecting objects in clutter is an important capability for a household robot executing pick and place tasks in realistic settings. While approaches from 2D vision work reasonably well under certain lighting conditions and given unique textures, the development of inexpensive RGBD cameras opens the way for real-time geometric approaches that do not require templates of known objects. This paper presents a part-graph-based hashing method for classifying objects in clutter, using an additive feature descriptor. The method is incremental, allowing easy addition of new training data without recreating the complete model, and takes advantage of the additive nature of the feature to increase efficiency. It is based on a graph representation of the scene created from considering possible groupings of over-segmented scene parts, which can in turn be used in classification. Additionally, the results over multiple segmentations can be accumulated to increase detection accuracy. We evaluated our approach on a large RGBD dataset containing over 15000 Kinect scans of 102 objects grouped in 16 categories, which we arranged into six geometric classes. Furthermore, tests on complete cluttered scenes were performed as well, and used to showcase the importance of domain adaptation.
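
A simplified sketch of the additive-descriptor and hashing idea (not the paper's actual feature or implementation): each over-segmented part carries a histogram-like descriptor, a candidate grouping of parts is described by the element-wise sum of its members, and a hash of the quantized combined descriptor indexes previously seen categories.

```python
# Simplified sketch: additive part descriptors + hashing of part groupings.
from itertools import combinations

def combine(parts):
    return tuple(sum(vals) for vals in zip(*parts))

def quantize(desc, step=2):
    return tuple(v // step for v in desc)

hash_table = {}   # hypothetical training table: quantized descriptor -> categories

def train(parts, category):
    hash_table.setdefault(quantize(combine(parts)), []).append(category)

def classify_groupings(scene_parts, max_size=2):
    votes = {}
    for k in range(1, max_size + 1):
        for group in combinations(scene_parts, k):
            for cat in hash_table.get(quantize(combine(group)), []):
                votes[cat] = votes.get(cat, 0) + 1
    return votes

train([(4, 1, 0), (3, 2, 1)], "cylindrical")   # two parts of one training object
scene = [(4, 1, 0), (3, 2, 1), (0, 0, 9)]       # over-segmented test scene
print(classify_groupings(scene))                # {'cylindrical': 1}
```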


Journal of Intelligent and Robotic Systems | 2014

Part-Based Geometric Categorization and Object Reconstruction in Cluttered Table-Top Scenes

Zoltan-Csaba Marton; Ferenc Balint-Benczedi; Oscar Martinez Mozos; Nico Blodow; Asako Kanezaki; Lucian Cosmin Goron; Dejan Pangercic; Michael Beetz

This paper presents an approach for 3D geometry-based object categorization in cluttered table-top scenes. In our method, objects are decomposed into different geometric parts whose spatial arrangement is represented by a graph. The matching and searching of graphs representing the objects is sped up by using a hash table which contains possible spatial configurations of the different parts that constitute the objects. Additive feature descriptors are used to label partially or completely visible object parts. In this work we categorize objects into five geometric shapes: sphere, box, flat, cylindrical, and disk/plate, as these shapes represent the majority of objects found on tables in typical households. Moreover, we reconstruct complete 3D models that include the invisible back-sides of objects as well, in order to facilitate manipulation by domestic service robots. Finally, we present an extensive set of experiments on point clouds of objects using an RGBD camera, and our results highlight the improvements over previous methods.
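
For the back-side completion, one common and simple strategy, used here only as an illustration and not necessarily the reconstruction method of the paper, is to mirror the visible points across an estimated symmetry plane through the object centroid:

```python
# Illustrative sketch: complete the hidden back-side by mirroring visible
# points across a vertical symmetry plane through the centroid (x = cx).
def mirror_backside(points):
    """points: list of (x, y, z) from the camera-facing side."""
    cx = sum(p[0] for p in points) / len(points)
    mirrored = [(2 * cx - x, y, z) for x, y, z in points]
    return points + mirrored

front = [(0.10, 0.0, 0.0), (0.12, 0.0, 0.05), (0.11, 0.02, 0.02)]
print(len(mirror_backside(front)))   # 6 points: visible side + mirrored back
```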


Intelligent Robots and Systems | 2015

Towards robots conducting chemical experiments

Gheorghe Lisca; Daniel Nyga; Ferenc Balint-Benczedi; Hagen Langer; Michael Beetz

Autonomous mobile robots are employed to perform increasingly complex tasks which require appropriate task descriptions, accurate object recognition, and dexterous object manipulation. In this paper we address three key questions: how to obtain appropriate task descriptions from natural language (NL) instructions, how to choose the control program that performs a task description, and how to recognize and manipulate the objects referred to by a task description. We describe an evaluated robotic agent which takes a natural language instruction stating a step of a DNA extraction procedure as its starting point. The system is able to transform the textual instruction into an abstract symbolic plan representation. It can reason about the representation and answer queries about what, how, and why it is done. The robot selects the most appropriate control programs and robustly coordinates all manipulations required by the task description. The execution is based on a perception sub-system which is able to locate and recognize the objects and instruments needed in the DNA extraction procedure.
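
A toy sketch of the instruction-to-plan step (far simpler than the paper's NL understanding pipeline, with an invented instruction format): one protocol step is mapped to a symbolic action description that a plan executive could reason about and dispatch.

```python
# Toy sketch: map one NL protocol step to a symbolic action description.
import re

def parse_step(instruction: str) -> dict:
    m = re.match(r"add (\d+) ml of (.+) to the (.+)", instruction.lower())
    if not m:
        return {"action": "unknown", "text": instruction}
    amount, substance, target = m.groups()
    return {"action": "transfer",
            "substance": substance,
            "amount_ml": int(amount),
            "target": target}

print(parse_step("Add 5 ml of lysis buffer to the tube"))
# {'action': 'transfer', 'substance': 'lysis buffer', 'amount_ml': 5, 'target': 'tube'}
```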


International Conference on Robotics and Automation | 2016

Open robotics research using web-based knowledge services

Michael Beetz; Daniel Beßler; Jan Winkler; Jan-Hendrik Worch; Ferenc Balint-Benczedi; Georg Bartels; Aude Billard; Asil Kaan Bozcuoglu; Zhou Fang; Nadia Figueroa; Andrei Haidu; Hagen Langer; Alexis Maldonado; Ana Lucia Pais Ureche; Moritz Tenorth; Thiemo Wiedemeyer

In this paper we discuss how the combination of modern technologies in “big data” storage and management, knowledge representation and processing, cloud-based computation, and web technology can help the robotics community to establish and strengthen an open research discipline. We describe how we made the demonstrator of an EU project review openly available to the research community. Specifically, we recorded episodic memories with rich semantic annotations during a pizza-preparation experiment in autonomous robot manipulation. Afterwards, we released them as an open knowledge base using the cloud- and web-based robot knowledge service openEASE. We discuss several ways in which this open data can be used to validate our experimental reports and to tackle novel, challenging research problems.
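
The kind of query such shared episodic memories support can be sketched as follows; the data layout, field names, and events below are invented for illustration and are not the openEASE (Prolog/KnowRob-based) interface.

```python
# Hypothetical sketch: filtering an in-memory log of semantically annotated
# robot events, the way one might query shared episodic-memory recordings.
episode = [
    {"t": 12.4, "event": "grasp", "object": "pizza_cutter", "outcome": "succeeded"},
    {"t": 30.1, "event": "pour",  "object": "tomato_sauce", "outcome": "succeeded"},
    {"t": 55.7, "event": "grasp", "object": "cheese_bag",   "outcome": "failed"},
]

def query(log, **constraints):
    return [e for e in log if all(e.get(k) == v for k, v in constraints.items())]

# "Which grasps failed during the pizza-preparation episode?"
print(query(episode, event="grasp", outcome="failed"))
```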

Collaboration


Dive into Ferenc Balint-Benczedi's collaborations.

Top Co-Authors

Andreas Birk

Jacobs University Bremen
