Publication


Featured research published by Daniel Nyga.


International Conference on Robotics and Automation | 2010

Understanding and executing instructions for everyday manipulation tasks from the World Wide Web

Moritz Tenorth; Daniel Nyga; Michael Beetz

Service robots will have to accomplish more and more complex, open-ended tasks and regularly acquire new skills. In this work, we propose a new approach to the problem of generating plans for such household robots. Instead of composing them from atomic actions — the common approach in robot planning — we propose to transform task descriptions on websites like ehow.com into executable robot plans. We present methods for automatically converting the instructions from natural language into a formal, logic-based representation, for resolving the word senses using the WordNet database and the Cyc ontology, and for exporting the generated plans into the mobile robot's plan language RPL. We discuss the problem of inferring information that is missing in these descriptions and the problem of grounding the abstract task descriptions in the robot's perception and action system, and we propose techniques for solving them. The whole system works autonomously, without human interaction. It has been tested successfully on a set of about 150 natural-language directives, of which up to 80% could be correctly transformed.
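
A minimal sketch of the word-sense resolution step, using NLTK's WordNet interface; the example instruction and the most-frequent-sense heuristic are illustrative assumptions, and the paper's actual pipeline additionally maps the resolved senses to Cyc concepts:

```python
# Minimal sketch: look up WordNet senses for the verbs and nouns of an
# instruction step, as a stand-in for the paper's word-sense resolution.
# Requires: pip install nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def candidate_senses(word, pos):
    """Return all WordNet synsets for a word with the given part of speech."""
    return wn.synsets(word, pos=pos)

# Hypothetical instruction step: "place the cup on the table".
instruction = [("place", wn.VERB), ("cup", wn.NOUN), ("table", wn.NOUN)]

for word, pos in instruction:
    senses = candidate_senses(word, pos)
    # The real system disambiguates using the Cyc ontology; here we just
    # take the most frequent sense (WordNet's first synset) as a baseline.
    best = senses[0] if senses else None
    print(word, "->", best.name() if best else "no sense found",
          "-", best.definition() if best else "")
```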


International Conference on Robotics and Automation | 2015

RoboSherlock: Unstructured information processing for robot perception

Michael Beetz; Ferenc Balint-Benczedi; Nico Blodow; Daniel Nyga; Thiemo Wiedemeyer; Zoltan-Csaba Marton

We present RoboSherlock, an open-source software framework for implementing perception systems for robots performing human-scale everyday manipulation tasks. In RoboSherlock, perception and interpretation of realistic scenes is formulated as an unstructured information management (UIM) problem. The application of the UIM principle supports the implementation of perception systems that can answer task-relevant queries about objects in a scene, boost object-recognition performance by combining the strengths of multiple perception algorithms, support knowledge-enabled reasoning about objects, and enable automatic, knowledge-driven generation of processing pipelines. We demonstrate the potential of the proposed framework through three feasibility studies of real-world scene-perception systems built on top of RoboSherlock.
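
To make the UIM idea concrete, here is a toy Python sketch of an annotator pipeline; all class, annotator, and query names are hypothetical stand-ins, not RoboSherlock's actual API:

```python
# Illustrative sketch of the UIM idea: independent "annotators" attach
# symbolic annotations to object hypotheses, and queries are answered
# against the accumulated annotations.
from dataclasses import dataclass, field

@dataclass
class ObjectHypothesis:
    region: tuple                      # e.g. a bounding box in the scene
    annotations: dict = field(default_factory=dict)

def color_annotator(hyp):
    hyp.annotations["color"] = "red"   # placeholder for a color classifier

def shape_annotator(hyp):
    hyp.annotations["shape"] = "cylindrical"  # placeholder shape estimator

def run_pipeline(hypotheses, annotators):
    """Let every annotator contribute its annotations to every hypothesis."""
    for hyp in hypotheses:
        for annotate in annotators:
            annotate(hyp)
    return hypotheses

def query(hypotheses, **constraints):
    """Return hypotheses whose annotations match all query constraints."""
    return [h for h in hypotheses
            if all(h.annotations.get(k) == v for k, v in constraints.items())]

scene = run_pipeline([ObjectHypothesis(region=(0, 0, 64, 64))],
                     [color_annotator, shape_annotator])
print(query(scene, color="red", shape="cylindrical"))
```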


Intelligent Robots and Systems | 2012

Everything robots always wanted to know about housework (but were afraid to ask)

Daniel Nyga; Michael Beetz

In this paper we discuss the problem of action-specific knowledge processing, representation and acquisition by autonomous robots performing everyday activities. We report on a thorough analysis of the household domain, performed on a large corpus of natural-language instructions from the Web, which underscores the critical need for action-specific knowledge for robots acting in such environments. We introduce the concept of Probabilistic Robot Action Cores (PRAC), which are well suited for encoding such knowledge in a probabilistic first-order knowledge base. We additionally show how such a knowledge base can be acquired from natural language, address the problems of incompleteness, underspecification and ambiguity of naturalistic action specifications, and point out how PRAC models can tackle them.


International Conference on Robotics and Automation | 2014

PR2 Looking at Things: Ensemble Learning for Unstructured Information Processing with Markov Logic Networks

Daniel Nyga; Ferenc Balint-Benczedi; Michael Beetz

We investigate the perception and reasoning task of answering queries about realistic scenes with objects of daily use perceived by a robot. A key problem implied by the task is the variety of perceivable object properties, such as shape, texture, color, size, text pieces and logos, which go beyond the capabilities of any individual state-of-the-art perception method. A promising alternative is to employ combinations of more specialized perception methods. In this paper we propose a novel combination method, which structures perception as a two-step process, and apply it in our object perception system. In the first step, specialized methods annotate detected object hypotheses with symbolic information pieces. In the second step, a given query Q is answered by inferring the conditional probability P(Q | E), where E are the symbolic information pieces considered as evidence. In this setting, Q and E are part of a probabilistic model of scenes, objects and their annotations, over which the perception system has previously learned a joint probability distribution. The proposed method has substantial advantages over alternative methods in terms of the generality of queries that can be answered, the generation of information that can actively guide perception, the ease of extension, the possibility of including additional kinds of evidence, and its potential for realizing self-improving and self-specializing perception systems. We show for object categorization, one subclass of these probabilistic inferences, that strong categorization performance can be achieved by combining the employed expert perception methods in a synergistic manner.
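
As a toy illustration of the second step, the sketch below computes P(Q | E) by enumeration over a hand-made joint distribution of object annotations; the variables and numbers are invented, and the actual system performs this inference with Markov Logic Networks rather than an explicit table:

```python
# Answering a query Q by computing P(Q | E) from a joint distribution
# over object annotations, via brute-force enumeration.

# Joint P(category, shape, color) -- hypothetical numbers, summing to 1.
joint = {
    ("mug",    "cylindrical", "red"):   0.20,
    ("mug",    "cylindrical", "white"): 0.15,
    ("bottle", "cylindrical", "white"): 0.25,
    ("bottle", "box",         "white"): 0.05,
    ("cereal", "box",         "red"):   0.35,
}

def p(query, evidence):
    """P(query | evidence); both are {tuple_position: value} dicts over
    the positions (0=category, 1=shape, 2=color)."""
    def matches(world, constraints):
        return all(world[i] == v for i, v in constraints.items())
    p_e  = sum(pr for w, pr in joint.items() if matches(w, evidence))
    p_qe = sum(pr for w, pr in joint.items()
               if matches(w, evidence) and matches(w, query))
    return p_qe / p_e if p_e > 0 else 0.0

# The experts reported shape=cylindrical and color=white; query the category.
print(p({0: "bottle"}, {1: "cylindrical", 2: "white"}))  # -> 0.625
```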


International Conference on Robotics and Automation | 2011

How-models of human reaching movements in the context of everyday manipulation activities

Daniel Nyga; Moritz Tenorth; Michael Beetz

We present a system for learning models of human reaching trajectories in the context of everyday manipulation activities. Different kinds of trajectories are automatically discovered, and each of them is described by its semantic context. In a first step, the system clusters the trajectories observed in human everyday activities based on their shapes, and then learns the relation between these trajectories and the contexts in which they are used. The resulting models can be used by robots to select a trajectory for a given context. They can also serve as powerful prediction models of human motion to improve human-robot interaction. Experiments on the TUM kitchen data set show that the method is capable of discovering meaningful clusters in real-world observations of everyday activities such as setting a table.
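
A minimal sketch of the shape-based clustering step, assuming k-means over resampled, flattened trajectories and synthetic data; the paper's actual clustering procedure and its context-learning step are not reproduced here:

```python
# Cluster 2-D reach trajectories by shape with k-means, as a stand-in
# for the paper's trajectory-clustering step on real observations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def toy_trajectory(kind, n_points=20):
    """Generate a noisy 2-D reach path: 'direct' or 'arched' (stand-ins
    for the motion classes discovered from real observations)."""
    t = np.linspace(0.0, 1.0, n_points)
    y = np.zeros_like(t) if kind == "direct" else np.sin(np.pi * t)
    return np.column_stack([t, y]) + rng.normal(scale=0.02, size=(n_points, 2))

trajectories = [toy_trajectory(k) for k in ["direct"] * 10 + ["arched"] * 10]
X = np.stack([traj.ravel() for traj in trajectories])  # flatten to vectors

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the two motion shapes should fall into separate clusters
```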


International Symposium on Robotics | 2018

Cloud-based Probabilistic Knowledge Services for Instruction Interpretation

Daniel Nyga; Michael Beetz

As the tasks of autonomous manipulation robots become more complex, tasking robots with natural-language instructions becomes more important. Executing such instructions in the way they are meant often requires robots to infer missing information and disambiguate given information using substantial amounts of commonsense knowledge. In this work, we report on Probabilistic Action Cores (PRAC) (Nyga and Beetz, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012) – a framework for learning and reasoning about action-specific probabilistic knowledge bases, which can be learned from hand-labeled instructions to address this problem. In PRAC, knowledge about actions and objects is compactly represented by first-order probabilistic models, which are used to learn a joint probability distribution over the ways in which instructions for a given action verb are formulated. These joint probability distributions are then used to compute the plan instantiation that has the highest probability of producing the intended action given the natural-language instruction. Formulating plan interpretation as a conditional probability is a promising approach because it simultaneously lets us infer the plan most appropriate for performing the instruction, refine the plan's parameters on the basis of the information given in the instruction, and automatically fill in missing parameters by inferring their most probable values from the distribution. PRAC has been implemented as a web-based online service on the cloud-robotics platform openEASE [7].
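
The core inference pattern can be illustrated with a toy joint distribution over action roles; the roles, values, and probabilities below are invented, and PRAC itself uses learned first-order probabilistic models rather than an explicit table:

```python
# PRAC-style interpretation sketch: given a partially specified instruction,
# choose the complete plan instantiation with the highest probability under
# a joint distribution over action roles, filling in unstated roles.

joint = {
    # (action_core, theme, destination): probability (hypothetical)
    ("Putting", "milk",  "fridge"):  0.40,
    ("Putting", "milk",  "table"):   0.10,
    ("Putting", "plate", "table"):   0.30,
    ("Putting", "plate", "shelf"):   0.20,
}

def most_probable_instantiation(evidence):
    """Argmax over complete role assignments consistent with the evidence;
    evidence maps tuple positions (0=action, 1=theme, 2=destination)."""
    consistent = {w: pr for w, pr in joint.items()
                  if all(w[i] == v for i, v in evidence.items())}
    return max(consistent, key=consistent.get) if consistent else None

# "Put away the milk." -- the destination is unstated and gets inferred:
print(most_probable_instantiation({0: "Putting", 1: "milk"}))
# -> ('Putting', 'milk', 'fridge')
```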


Intelligent Robots and Systems | 2015

Towards robots conducting chemical experiments

Gheorghe Lisca; Daniel Nyga; Ferenc Balint-Benczedi; Hagen Langer; Michael Beetz

Autonomous mobile robots are employed to perform increasingly complex tasks which require appropriate task descriptions, accurate object recognition, and dexterous object manipulation. In this paper we address three key questions: how to obtain appropriate task descriptions from natural-language (NL) instructions, how to choose the control program to perform a task description, and how to recognize and manipulate the objects referred to by a task description. We describe an evaluated robotic agent which takes as its starting point a natural-language instruction stating a step of a DNA extraction procedure. The system transforms the textual instruction into an abstract symbolic plan representation, can reason about that representation, and can answer queries about what is done, how, and why. The robot selects the most appropriate control programs and robustly coordinates all manipulations required by the task description. The execution is based on a perception sub-system which is able to locate and recognize the objects and instruments needed in the DNA extraction procedure.


International Conference on Robotics and Automation | 2014

Controlled Natural Languages for Language Generation in Artificial Cognition

Nicholas H. Kirk; Daniel Nyga; Michael Beetz

In this paper we discuss, within the context of artificial assistants performing everyday activities, a resolution method for disambiguating missing or unsatisfactorily inferred action-specific information via explicit clarification. Noting the lack of preexisting robot-to-human linguistic interaction methods, we introduce a novel use of Controlled Natural Languages (CNL) as an output language and a means of sentence construction for doubt verbalization. We additionally provide implemented working scenarios, and discuss future possibilities and problems related to the verbalization of technical cognition when making use of Controlled Natural Languages.
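
A minimal sketch of template-based doubt verbalization, with a hypothetical template and role names rather than the paper's actual CNL grammar:

```python
# Render an unambiguous clarification question from a fixed controlled
# template, listing the candidate fillers for an unresolved action role.
TEMPLATE = "Should I {verb} the {theme} {preposition} the {candidates}?"

def verbalize_doubt(verb, theme, preposition, candidate_values):
    """Fill the template with the known roles and the candidate values
    for the role the robot is in doubt about."""
    options = " or the ".join(candidate_values)
    return TEMPLATE.format(verb=verb, theme=theme,
                           preposition=preposition, candidates=options)

print(verbalize_doubt("put", "milk", "into", ["fridge", "cupboard"]))
# -> "Should I put the milk into the fridge or the cupboard?"
```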


Archive | 2015

Planning Everyday Manipulation Tasks – Prediction-based Transformation of Structured Activity Descriptions

Michael Beetz; Hagen Langer; Daniel Nyga

The field of autonomous robot manipulation is experiencing tremendous progress: the cost of robot platforms is decreasing substantially, sensor technology and perceptual capabilities are advancing rapidly, and control mechanisms for manipulators are growing increasingly sophisticated. Researchers have also recently implemented robots that autonomously perform challenging manipulation tasks, such as making pancakes, folding clothes, baking cookies, and cutting salad. These developments lead us to the next big challenge: the investigation of control systems for robotic agents, such as robot co-workers and assistants, that are capable of mastering human-scale everyday manipulation tasks. Robots mastering everyday manipulation will have to perform tasks as general as "clean up", "set the table", and "put the bottle away/on the table". Although such tasks are vaguely formulated, the persons stating them have detailed expectations of how the robot should perform them. We believe that an essential planning capability of robotic agents mastering everyday activity will be the capability to reason about, and predictively transform, incomplete and ambiguous descriptions of the various aspects of manipulation activities: the objects to be manipulated, the tools to be used, the locations from which objects can be manipulated, the motions and grasps to be performed, and so on. Vague descriptions of tasks and activities are not only a key challenge for robot planning but also an opportunity for more flexible, robust, and general robot control systems.


International Conference on Robotics and Automation | 2017

What no robot has seen before — Probabilistic interpretation of natural-language object descriptions

Daniel Nyga; Mareike Picklum; Michael Beetz

We investigate the task of recognizing objects of daily use in human environments purely on the basis of object descriptions given in natural language. In particular, we present an approach that transforms natural-language phrases describing such objects by their visual appearance into formal, semantic representations of their perceptual characteristics, which in turn can be used by a robot perception system to identify objects the robot has never encountered before. To this end, we learn probabilistic first-order knowledge bases from encyclopedic articles and online dictionaries, which contain textual descriptions of a vast number of everyday objects. We demonstrate the applicability of the approach on a robotic system in a proof-of-concept evaluation on a selected set of object descriptions acquired from the internet.
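
A toy sketch of the description-to-representation step, using a hand-made attribute lexicon; the paper instead learns probabilistic first-order models from dictionary and encyclopedia text:

```python
# Map a natural-language object description to a symbolic representation
# of perceptual attributes that a perception system could match against
# detections. The lexicon below is an illustrative assumption.
LEXICON = {
    "red": ("color", "red"), "yellow": ("color", "yellow"),
    "round": ("shape", "round"), "elongated": ("shape", "elongated"),
    "small": ("size", "small"), "large": ("size", "large"),
}

def parse_description(text):
    """Collect an (attribute, value) pair for every lexicon word in the text."""
    attributes = {}
    for token in text.lower().replace(",", " ").split():
        if token in LEXICON:
            attr, value = LEXICON[token]
            attributes[attr] = value
    return attributes

print(parse_description("a small, round, red fruit"))
# -> {'size': 'small', 'shape': 'round', 'color': 'red'}
```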
