Ronald P. A. Petrick
University of Edinburgh
Publications
Featured research published by Ronald P. A. Petrick.
Robotics and Autonomous Systems | 2011
Norbert Krüger; Christopher W. Geib; Justus H. Piater; Ronald P. A. Petrick; Mark Steedman; Florentin Wörgötter; Ales Ude; Tamim Asfour; Dirk Kraft; Damir Omrcen; Alejandro Agostini; Rüdiger Dillmann
This paper formalises Object–Action Complexes (OACs) as a basis for symbolic representations of sensory–motor experience and behaviours. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. The paper gives a formal definition of OACs, provides examples of their use for autonomous cognitive robots, and enumerates a number of critical learning problems in terms of OACs.
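For illustration, here is a minimal sketch of how an OAC might be represented in code, assuming the structure of an identifier, a prediction function over a symbolic state, and an empirically updated success statistic; the attribute names and update rule are illustrative assumptions, not the paper's exact formalisation.

```python
from dataclasses import dataclass
from typing import Callable

# A symbolic state: attribute names mapped to values (illustrative).
State = dict[str, object]

@dataclass
class OAC:
    """Sketch of an Object-Action Complex: an identifier, a prediction
    function over symbolic states, and an empirical success statistic."""
    name: str
    predict: Callable[[State], State]  # expected effect of executing the OAC
    successes: int = 0
    trials: int = 0

    def reliability(self) -> float:
        # Long-term statistic estimating how well predictions match outcomes.
        return self.successes / self.trials if self.trials else 0.0

    def update(self, predicted: State, observed: State) -> None:
        # Compare the prediction with the observed outcome and update the statistic.
        self.trials += 1
        if all(observed.get(k) == v for k, v in predicted.items()):
            self.successes += 1

# Example: a grasp OAC that predicts the object ends up in the hand.
grasp = OAC("grasp-object", lambda s: {**s, "in_hand": True})
```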
international conference on multimodal interfaces | 2012
Mary Ellen Foster; Andre Gaschler; Manuel Giuliani; Amy Isard; Maria Pateraki; Ronald P. A. Petrick
We introduce a humanoid robot bartender that is capable of dealing with multiple customers in a dynamic, multi-party social setting. The robot system incorporates state-of-the-art components for computer vision, linguistic processing, state management, high-level reasoning, and robot control. In a user evaluation, 31 participants interacted with the bartender in a range of social situations. Most customers successfully obtained a drink from the bartender in all scenarios, and the factors that had the greatest impact on subjective satisfaction were task success and dialogue efficiency.
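A rough sketch of how the components named in the abstract might be wired into a single sensing-reasoning-acting loop follows; every class and method name here is a hypothetical placeholder rather than part of the actual system.

```python
# Minimal sketch of a component loop for a multi-party interaction system.
# All component classes and methods here are hypothetical placeholders.

class VisionModule:
    def observe(self):
        # e.g. customer positions, torso orientation, gaze
        return [{"id": 1, "facing_bartender": True, "has_drink": False}]

class SpeechModule:
    def recognise(self):
        return {"speaker": 1, "act": "order", "drink": "water"}

class StateManager:
    def fuse(self, vision, speech):
        # Merge low-level observations into a symbolic social state.
        return {"customer_1": {"seeks_attention": vision[0]["facing_bartender"],
                               "order": speech.get("drink")}}

class Planner:
    def next_action(self, state):
        # High-level reasoning: decide whether to greet, ask, or serve.
        order = state["customer_1"].get("order")
        return ("serve", order) if order else ("greet", 1)

def interaction_step(vision, speech, state_manager, planner, robot_act):
    state = state_manager.fuse(vision.observe(), speech.recognise())
    robot_act(planner.next_action(state))

interaction_step(VisionModule(), SpeechModule(), StateManager(), Planner(), print)
```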
international conference on multimodal interfaces | 2013
Manuel Giuliani; Ronald P. A. Petrick; Mary Ellen Foster; Andre Gaschler; Amy Isard; Maria Pateraki; Markos Sigalas
We address the question of whether service robots that interact with humans in public spaces must express socially appropriate behaviour. To do so, we implemented a robot bartender that is able to take drink orders from humans and serve drinks to them. By using a high-level automated planner, we explore two different robot interaction styles: in the task-only setting, the robot simply fulfils its goal of asking customers for drink orders and serving them drinks; in the socially intelligent setting, the robot additionally acts in a manner socially appropriate to the bartender scenario, based on the behaviour of humans observed in natural bar interactions. The results of a user study show that the interactions with the socially intelligent robot were somewhat more efficient, but the two implemented behaviour settings had only a small influence on the subjective ratings. However, there were objective factors that influenced participant ratings: the overall duration of the interaction had a positive influence on the ratings, while the number of system order requests had a negative influence. We also found a cultural difference: German participants gave the system higher pre-test ratings than participants who interacted in English, although the post-test scores were similar.
computational intelligence | 2011
Alexander Koller; Ronald P. A. Petrick
Natural language generation (NLG) is a major subfield of computational linguistics with a long tradition as an application area of automated planning systems. While current mainstream approaches have largely ignored the planning approach to NLG, several recent publications have sparked a renewed interest in this area. In this article, we investigate the extent to which these new NLG approaches profit from the advances in planner expressiveness and efficiency. Our findings are mixed. While modern planners can readily handle the search problems that arise in our NLG experiments, their overall runtime is often dominated by the grounding step they perform as preprocessing. Furthermore, small changes in the structure of a domain can significantly shift the balance between search and preprocessing. Overall, our experiments show that the off‐the‐shelf planners we tested are unusably slow for nontrivial NLG problem instances. As a result, we offer our domains and experiences as challenges for the planning community.
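The claim that grounding dominates runtime can be made concrete with a back-of-the-envelope calculation: an operator with k parameters yields on the order of n^k ground actions for n domain objects. The sketch below, using invented operator arities, shows how quickly this preprocessing cost grows as objects are added to a domain.

```python
# Rough illustration of grounding blow-up: each operator with k parameters
# yields on the order of n**k ground actions for n domain objects.
# The operator arities below are invented for illustration.

def ground_action_count(num_objects: int, operator_arities: list[int]) -> int:
    return sum(num_objects ** k for k in operator_arities)

arities = [2, 3, 3, 4]  # hypothetical NLG operators (e.g. referring-expression rules)
for n in (5, 10, 20, 40):
    print(f"{n:3d} objects -> {ground_action_count(n, arities):>10,} ground actions")
# The count grows from a few hundred to several million ground actions,
# all of which must be built before search even begins.
```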
intelligent robots and systems | 2013
Andre Gaschler; Ronald P. A. Petrick; Manuel Giuliani; Markus Rickert; Alois Knoll
Robot task planning is an inherently challenging problem, as it covers both continuous-space geometric reasoning about robot motion and perception and purely symbolic knowledge about actions and objects. This paper presents a novel “knowledge of volumes” framework for solving generic robot tasks in partially known environments. In particular, this approach (abbreviated KVP) combines the power of symbolic, knowledge-level AI planning with the efficient computation of volumes, which serve as an intermediate representation for both robot action and perception. While we demonstrate the effectiveness of our framework in a bimanual robot bartender scenario, our approach is also more generally applicable to tasks in automation and mobile manipulation involving arbitrary numbers of manipulators.
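A minimal sketch of grounding a symbolic precondition in a geometric volume query follows, using axis-aligned bounding boxes as stand-ins for swept volumes; the predicate name and box representation are assumptions for illustration, not the paper's implementation.

```python
# Sketch: a symbolic precondition evaluated via a geometric volume query.
# Boxes are axis-aligned (min corner, max corner) stand-ins for swept volumes.

Box = tuple[tuple[float, float, float], tuple[float, float, float]]

def boxes_intersect(a: Box, b: Box) -> bool:
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def obstruction_free(swept_volume: Box, obstacles: dict[str, Box]) -> bool:
    """Symbolic predicate: true iff the planned motion's swept volume
    does not intersect any known obstacle volume."""
    return not any(boxes_intersect(swept_volume, box) for box in obstacles.values())

arm_sweep = ((0.2, 0.0, 0.8), (0.6, 0.3, 1.2))  # hypothetical reach motion
scene = {"bottle": ((0.5, 0.1, 0.9), (0.55, 0.15, 1.1)),
         "glass":  ((0.9, 0.2, 0.9), (0.95, 0.25, 1.0))}
print(obstruction_free(arm_sweep, scene))  # False: the bottle is in the way
```

The planner only ever sees the truth value of the predicate, while the geometric computation remains hidden behind it, which is the sense in which volumes act as an intermediate representation.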
international conference on robotics and automation | 2014
Peter Kaiser; Mike Lewis; Ronald P. A. Petrick; Tamim Asfour; Mark Steedman
Autonomous robots often require domain knowledge to act intelligently in their environment. This is particularly true for robots that use automated planning techniques, which require symbolic representations of the operating environment and the robot's capabilities. However, the task of specifying domain knowledge by hand is tedious and prone to error. As a result, we aim to automate the process of acquiring general common sense knowledge of objects, relations, and actions by extracting such information from large amounts of natural language text, written by humans for human readers. We present two methods for knowledge acquisition, requiring only limited human input, which focus on the inference of spatial relations from text. Although our approach is applicable to a range of domains and information, we only consider one type of knowledge here, namely object locations in a kitchen environment. As a proof of concept, we test our approach using an automated planner and show how the addition of common sense knowledge can improve the quality of the generated plans.
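As a hedged illustration of the pipeline from text to planner knowledge, the snippet below turns simple "X is in/on Y" sentences into initial-state facts; the extraction pattern and the fact syntax are invented for this sketch rather than taken from the paper.

```python
import re

# Illustrative extraction of object-location relations from text.
# The pattern and the fact syntax are assumptions made for this sketch.
SENTENCES = [
    "The mug is in the cupboard.",
    "The kettle is on the counter.",
    "The milk is usually kept in the fridge.",
]

PATTERN = re.compile(
    r"the\s+(\w+)\s+is\s+(?:usually\s+kept\s+)?(in|on)\s+the\s+(\w+)",
    re.IGNORECASE,
)

def extract_facts(sentences):
    """Turn matched relations into planner-style initial-state facts."""
    facts = []
    for sentence in sentences:
        match = PATTERN.search(sentence)
        if match:
            obj, rel, loc = (w.lower() for w in match.groups())
            facts.append(f"({rel} {obj} {loc})")
    return facts

print(extract_facts(SENTENCES))
# ['(in mug cupboard)', '(on kettle counter)', '(in milk fridge)']
```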
international conference on robotics and automation | 2015
Andre Gaschler; Ingmar Kessler; Ronald P. A. Petrick; Alois Knoll
For robots to solve hard tasks in real-world manufacturing and service contexts, they need to reason about both the symbolic and geometric preconditions and effects of complex actions. We use an existing Knowledge of Volumes approach to robot task planning (KVP), which facilitates hybrid planning with symbolic actions and continuous-valued robot and object motion, and make two important additions to this approach: (i) new geometric predicates are added for complex object manipulation planning, and (ii) all geometric queries, such as collision and inclusion of objects and swept volumes, are implemented with a single-sided, bounded approximation, which calculates efficient and safe robot motion paths. Our task planning framework is evaluated in multiple scenarios, using concise and generic scenario definitions.
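The point of a single-sided, bounded approximation is that errors only ever produce false collision reports, never missed ones, so any motion the check accepts is safe. The sketch below illustrates this with bounding spheres inflated by an error bound; the padding scheme is an assumption for illustration, not the paper's geometry backend.

```python
import math

# Sketch of a conservative (single-sided) collision query: volumes are
# over-approximated by bounding spheres padded by an error bound epsilon,
# so the check may report a collision that is not there, but never misses one.

def conservative_collision(center_a, radius_a, center_b, radius_b, epsilon=0.01):
    distance = math.dist(center_a, center_b)
    # Inflate both radii by epsilon: an over-approximation of the true volumes.
    return distance <= (radius_a + epsilon) + (radius_b + epsilon)

# A motion is only accepted when even the inflated volumes stay apart.
gripper = ((0.40, 0.10, 1.00), 0.06)
bottle = ((0.50, 0.10, 1.00), 0.04)
print(conservative_collision(*gripper, *bottle))
# True: the inflated volumes overlap, so the motion is rejected as potentially unsafe.
```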
Proceedings of the IEEE | 2016
Volker Krueger; Arnaud Chazoule; Matthew Crosby; Antoine Lasnier; Mikkel Rath Pedersen; Francesco Rovida; Lazaros Nalpantidis; Ronald P. A. Petrick; Cesar Toscano; Germano Veiga
Cognitive robots, able to adapt their actions based on sensory information and the management of uncertainty, have begun to find their way into manufacturing settings. However, the full potential of these robots has not been exploited, largely due to the lack of vertical integration with existing IT infrastructures, such as the manufacturing execution system (MES), as part of a large-scale cyber-physical entity. This paper reports on considerations and findings from the research project STAMINA, which is developing such a cognitive cyber-physical system and applying it to a concrete and well-known use case from the automotive industry. Our approach allows manufacturing tasks to be performed without human intervention, even if the available description of the environment (the world model) suffers from large uncertainties. Thus, the robot becomes an integral part of the MES, resulting in a highly flexible overall system.
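As a rough sketch of the vertical integration described here, the code below shows an MES-style order being expanded into robot skills, with a perception skill inserted when the world model is too uncertain; the order format, skill names, and threshold are all invented for illustration.

```python
# Hypothetical sketch: an MES order handled by a skill-based cognitive robot.
# Skill names, the order format, and the uncertainty threshold are invented.

WORLD_MODEL = {"part_A": {"pose": (1.2, 0.4, 0.1), "uncertainty": 0.25}}

def execute_order(order, world_model, uncertainty_threshold=0.05):
    plan = []
    for part in order["parts"]:
        belief = world_model[part]
        if belief["uncertainty"] > uncertainty_threshold:
            # Large uncertainty: insert a perception skill before manipulating.
            plan.append(("locate", part))
        plan.extend([("pick", part), ("place", part, order["kit"])])
    return plan

order = {"id": "MES-0042", "kit": "kit_7", "parts": ["part_A"]}
print(execute_order(order, WORLD_MODEL))
# [('locate', 'part_A'), ('pick', 'part_A'), ('place', 'part_A', 'kit_7')]
```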
Archive | 2016
Mary Ellen Foster; Ronald P. A. Petrick
In: K. Jokinen, G. Wilcock (Eds.), Dialogues with Social Robots
Numerous toolkits are available for developing speech-based dialogue systems. Many of these toolkits include not only a method for representing states and actions, but also a mechanism for reasoning over and selecting those actions, often combined with a technical framework designed to simplify the task of creating end-to-end systems. This tight coupling of representation, reasoning, and implementation makes it difficult both to compare different approaches and to analyse the properties of individual techniques. We contrast this situation with the state of the art in a related research area, AI planning, where a set of common representations has been defined and is widely used to enable direct comparison of different reasoning approaches. We argue that adopting a similar separation would greatly benefit the dialogue research community.
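To illustrate the separation the authors advocate, the sketch below keeps the dialogue state and action representation purely declarative so that different reasoners can be swapped in behind the same interface; the encoding and the simple rule-based reasoner are invented for this sketch.

```python
# Sketch: a declarative dialogue representation decoupled from the reasoner.
# The state encoding, actions, and rule-based reasoner are illustrative only.

STATE = {"greeted": False, "order": None}

ACTIONS = [
    {"name": "greet",     "pre": {"greeted": False},               "eff": {"greeted": True}},
    {"name": "ask_order", "pre": {"greeted": True, "order": None}, "eff": {"order": "pending"}},
    {"name": "serve",     "pre": {"order": "pending"},             "eff": {"order": "served"}},
]

def applicable(action, state):
    return all(state.get(k) == v for k, v in action["pre"].items())

def rule_based_reasoner(state, actions):
    # Any other reasoner (e.g. a planner or a learned policy) could replace
    # this function without touching the representation above.
    for action in actions:
        if applicable(action, state):
            return action["name"]
    return "wait"

print(rule_based_reasoner(STATE, ACTIONS))  # 'greet'
```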
Journal of Artificial Intelligence Research | 2018
Andre Gaschler; Ronald P. A. Petrick; Oussama Khatib; Alois Knoll