Publication


Featured research published by Jeremy L. Wyatt.


International Conference on Computer Vision Systems | 2008

Functional object class detection based on learned affordance cues

Michael Stark; Philipp Lies; Michael Zillich; Jeremy L. Wyatt; Bernt Schiele

Current approaches to visual object class detection mainly focus on the recognition of basic level categories, such as cars, motorbikes, mugs and bottles. Although these approaches have demonstrated impressive performance in terms of recognition, their restriction to these categories seems inadequate in the context of embodied, cognitive agents. Here, distinguishing objects according to functional aspects based on object affordances is important in order to enable manipulation of, and interaction between, physical objects and the cognitive agent. In this paper, we propose a system for the detection of functional object classes, based on a representation of visually distinct hints on object affordances (affordance cues). It spans the complete range from tutor-driven acquisition of affordance cues, through learning of corresponding object models, to the detection of novel instances of functional object classes in real images.


Advanced Engineering Informatics | 2010

Engineering intelligent information-processing systems with CAST

Nick Hawes; Jeremy L. Wyatt

The CoSy Architecture Schema Toolkit (CAST) is a new software toolkit, and related processing paradigm, which supports the construction and exploration of information-processing architectures for intelligent systems such as robots. CAST eschews the standard point-to-point connectivity of traditional message-based software toolkits for robots, instead supporting the parallel refinement of representations on shared working memories. In this article we focus on the engineering-related aspects of CAST, including the challenges that had to be overcome in its creation, and how it allows us to design and build novel intelligent systems in flexible ways. We support our arguments with examples drawn from recent engineering efforts dedicated to building two intelligent systems with similar architectures: the PlayMate system for table-top manipulation and the Explorer system for human-augmented mapping.
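
To make the contrast with point-to-point messaging concrete, here is a minimal sketch of the shared-working-memory idea, assuming invented names (WorkingMemory, subscribe, write); it is not the actual CAST API, only an illustration of components refining a shared entry in parallel.

```python
# Minimal sketch (not the real CAST API): components share a working memory
# and react to changes, instead of exchanging point-to-point messages.
from collections import defaultdict

class WorkingMemory:
    """A shared store; components subscribe to entry types they care about."""
    def __init__(self):
        self.entries = {}
        self.subscribers = defaultdict(list)

    def subscribe(self, entry_type, callback):
        self.subscribers[entry_type].append(callback)

    def write(self, entry_id, entry_type, data):
        # Writing (or overwriting) an entry notifies every subscribed
        # component, so several components can refine the same
        # representation in parallel.
        self.entries[entry_id] = (entry_type, data)
        for cb in self.subscribers[entry_type]:
            cb(entry_id, data)

wm = WorkingMemory()
wm.subscribe("object_hypothesis",
             lambda eid, d: print(f"planner sees {eid}: {d}"))
wm.write("obj-1", "object_hypothesis", {"label": "mug", "confidence": 0.6})
wm.write("obj-1", "object_hypothesis", {"label": "mug", "confidence": 0.9})
```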


Human-Robot Interaction | 2008

Crossmodal content binding in information-processing architectures

Henrik Jacobsson; Nick Hawes; Geert-Jan M. Kruijff; Jeremy L. Wyatt

Operating in a physical context, an intelligent robot faces two fundamental problems. First, it needs to combine information from its different sensors to form a representation of the environment that is more complete than any representation a single sensor could provide. Second, it needs to combine high-level representations (such as those for planning and dialogue) with sensory information, to ensure that the interpretations of these symbolic representations are grounded in the situated context. Previous approaches to this problem have used techniques such as (low-level) information fusion, ontological reasoning, and (high-level) concept learning. This paper presents a framework in which these, and related approaches, can be used to form a shared representation of the current state of the robot in relation to its environment and other agents. Preliminary results from an implemented system are presented to illustrate how the framework supports behaviours commonly required of an intelligent robot.
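
A toy sketch of the binding idea may help: group content "proxies" from different modalities into shared unions when their features are compatible. The proxy dictionaries, feature names, and the compatible/bind functions are all invented for illustration and are not the authors' framework.

```python
# Illustrative only: bind proxies from different modalities when the
# features they have in common agree.
def compatible(a, b):
    """Two proxies can bind if every feature they share has the same value."""
    shared = (set(a) & set(b)) - {"modality"}
    return all(a[f] == b[f] for f in shared)

def bind(proxies):
    unions = []
    for p in proxies:
        for u in unions:
            if all(compatible(p, q) for q in u):
                u.append(p)          # join an existing binding union
                break
        else:
            unions.append([p])       # start a new union
    return unions

proxies = [
    {"modality": "vision",   "colour": "red",  "shape": "cylinder"},
    {"modality": "dialogue", "colour": "red",  "label": "the red mug"},
    {"modality": "vision",   "colour": "blue", "shape": "box"},
]
for union in bind(proxies):
    print([p["modality"] for p in union])
```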


International Joint Conference on Artificial Intelligence | 2011

Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour

Marc Hanheide; Charles Gretton; Richard Dearden; Nick Hawes; Jeremy L. Wyatt; Andrzej Pronobis; Alper Aydemir; Moritz Göbelbecker; Hendrik Zender

Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. We present a robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. Our first contribution is a probabilistic relational model integrating common-sense knowledge about the world in general, with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model, by automatically switching between decision-theoretic and classical procedures. We evaluate our system on object search tasks in two different real-world indoor environments. By reasoning about the trade-offs between possible courses of action with different informational effects, and exploiting the cues and general structures of those environments, our robot is able to consistently demonstrate efficient and reliable goal-directed behaviour.
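
The switching behaviour described above can be illustrated with a deliberately simplified sketch: when the belief over where an object is located is near-deterministic, commit to a classical plan; when it is uncertain, choose a sensing action instead. The thresholds, room names, and action strings are invented and do not reproduce the authors' planner.

```python
# Toy illustration of switching between classical and decision-theoretic steps.
from math import log2

def entropy(belief):
    return -sum(p * log2(p) for p in belief.values() if p > 0)

def plan(belief, threshold=0.5):
    if entropy(belief) < threshold:
        # Near-certain belief: classical planning, commit to the likeliest room.
        room = max(belief, key=belief.get)
        return [f"go_to({room})", "search({room})".format(room=room)]
    # Uncertain belief: pick the observation with the most to resolve.
    room = max(belief, key=lambda r: belief[r] * (1 - belief[r]))
    return [f"observe({room})"]

print(plan({"kitchen": 0.95, "office": 0.05}))   # low entropy -> act
print(plan({"kitchen": 0.55, "office": 0.45}))   # high entropy -> sense first
```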


International Conference on Robotics and Automation | 2011

Learning to predict how rigid objects behave under simple manipulation

Marek Sewer Kopicki; Sebastian Zurek; Rustam Stolkin; Thomas Mörwald; Jeremy L. Wyatt

An important problem in robotic manipulation is the ability to predict how objects behave under manipulative actions. This ability is necessary to allow planning of object manipulations. Physics simulators can be used to do this, but they model many kinds of object interaction poorly. An alternative is to learn a motion model for objects by interacting with them. In this paper we address the problem of learning to predict the interactions of rigid bodies in a probabilistic framework, and demonstrate the results in the domain of robotic push manipulation. A robot arm applies random pushes to various objects and observes the resulting motion with a vision system. The relationship between push actions and object motions is learned, and enables the robot to predict the motions that will result from new pushes. The learning does not make explicit use of physics knowledge, or any pre-coded physical constraints, nor is it even restricted to domains which obey any particular rules of physics. We use regression to learn efficiently how to predict the gross motion of a particular object. We further show how different density functions can encode different kinds of information about the behaviour of interacting objects. By combining these as a product of densities, we show how learned predictors can cope with a degree of generalisation to previously unencountered object shapes, subjected to previously unencountered push directions. Performance is evaluated through a combination of virtual experiments in a physics simulator, and real experiments with a 5-axis arm equipped with a simple, rigid finger.
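
As a rough sketch of the product-of-densities idea in one dimension (the paper works with full rigid-body motions; the numbers and the two-predictor split below are invented), two learned predictors each supply a density over the object's displacement and the combined prediction multiplies them.

```python
# Sketch: combine a global (regression-based) predictor and a local
# contact-based predictor as a product of Gaussian densities.
def product_of_gaussians(mu1, var1, mu2, var2):
    """The product of two 1-D Gaussian densities is, up to scale, Gaussian."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Global predictor: gross object motion learned from past pushes (invented).
mu_global, var_global = 0.12, 0.02     # metres
# Local predictor: motion implied by the finger-object contact (invented).
mu_contact, var_contact = 0.08, 0.01

mu, var = product_of_gaussians(mu_global, var_global, mu_contact, var_contact)
print(f"predicted displacement ~ N({mu:.3f}, {var:.4f})")
```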


Artificial Intelligence | 2010

Planning to see: A hierarchical approach to planning visual actions on a robot using POMDPs

Mohan Sridharan; Jeremy L. Wyatt; Richard Dearden

Flexible, general-purpose robots need to autonomously tailor their sensing and information processing to the task at hand. We pose this challenge as the task of planning under uncertainty. In our domain, the goal is to plan a sequence of visual operators to apply on regions of interest (ROIs) in images of a scene, so that a human and a robot can jointly manipulate and converse about objects on a tabletop. We pose visual processing management as an instance of probabilistic sequential decision making, and specifically as a Partially Observable Markov Decision Process (POMDP). The POMDP formulation uses models that quantitatively capture the unreliability of the operators and enable a robot to reason precisely about the trade-offs between plan reliability and plan execution time. Since planning in practical-sized POMDPs is intractable, we partially ameliorate this intractability for visual processing by defining a novel hierarchical POMDP based on the cognitive requirements of the corresponding planning task. We compare our hierarchical POMDP planning system (HiPPo) with a non-hierarchical POMDP formulation and the Continual Planning (CP) framework that handles uncertainty in a qualitative manner. We show empirically that HiPPo and CP outperform the naive application of all visual operators on all ROIs. The key result is that the POMDP methods produce more robust plans than CP or the naive visual processing. In summary, visual processing problems represent a challenging domain for planning techniques and our hierarchical POMDP-based approach for visual processing management opens up a promising new line of research.
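
To make the underlying trade-off concrete, here is a toy sketch (not HiPPo itself): a belief that an ROI contains the target object is updated after each unreliable visual operator, and the system either runs another operator or commits. The detector reliabilities and the commit threshold are assumed values.

```python
# Toy belief update for one ROI under an unreliable visual operator.
def update(belief, observation, tpr=0.85, fpr=0.10):
    """Bayes update of P(target in ROI) given a detector with known TPR/FPR."""
    like_present = tpr if observation else (1 - tpr)
    like_absent = fpr if observation else (1 - fpr)
    p = like_present * belief
    q = like_absent * (1 - belief)
    return p / (p + q)

belief = 0.5
for obs in [True, True, False]:
    belief = update(belief, obs)
    action = "commit" if belief > 0.9 or belief < 0.1 else "run another operator"
    print(f"obs={obs}, belief={belief:.2f}, action={action}")
```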


European Conference on Computer Vision | 2010

Correlation-based intrinsic image extraction from a single image

Xiaoyue Jiang; Andrew J. Schofield; Jeremy L. Wyatt

Intrinsic images represent the underlying properties of a scene such as illumination (shading) and surface reflectance. Extracting intrinsic images is a challenging, ill-posed problem. Human performance on tasks such as shadow detection and shape-from-shading is improved by adding colour and texture to surfaces. In particular, when a surface is painted with a textured pattern, correlations between local mean luminance and local luminance amplitude promote the interpretation of luminance variations as illumination changes. Based on this finding, we propose a novel feature, local luminance amplitude, to separate illumination and reflectance, and a framework to integrate this cue with hue and texture to extract intrinsic images. The algorithm uses steerable filters to separate images into frequency and orientation components and constructs shading and reflectance images from weighted combinations of these components. Weights are determined by correlations between corresponding variations in local luminance, local amplitude, colour and texture. The intrinsic images are further refined by ensuring the consistency of local texture elements. We test this method on surfaces photographed under different lighting conditions. The effectiveness of the algorithm is demonstrated by the correlation between our intrinsic images and ground truth shading and reflectance data. Luminance amplitude was found to be a useful cue. Results are also presented for natural images.
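
The central cue can be sketched with synthetic data: on a textured surface under a shading gradient, local mean luminance and local luminance amplitude rise and fall together. The window size, the use of patch standard deviation as an amplitude proxy, and the synthetic image below are all simplifying assumptions, not the paper's filter-based pipeline.

```python
# Sketch of the luminance mean/amplitude correlation cue on synthetic data.
import numpy as np

def local_stats(image, win=8):
    means, amps = [], []
    h, w = image.shape
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            patch = image[i:i + win, j:j + win]
            means.append(patch.mean())
            amps.append(patch.std())   # std as a crude amplitude proxy
    return np.array(means), np.array(amps)

rng = np.random.default_rng(0)
texture = rng.uniform(0.4, 0.6, size=(64, 64))      # painted pattern
shading = np.linspace(0.2, 1.0, 64)[None, :]         # illumination gradient
means, amps = local_stats(texture * shading)
r = np.corrcoef(means, amps)[0, 1]
print(f"mean/amplitude correlation: {r:.2f} (high -> shading, low -> reflectance)")
```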


IEEE Transactions on Autonomous Mental Development | 2010

Self-Understanding and Self-Extension: A Systems and Representational Approach

Jeremy L. Wyatt; Alper Aydemir; Michael Brenner; Marc Hanheide; Nick Hawes; Patric Jensfelt; Matej Kristan; Geert-Jan M. Kruijff; Pierre Lison; Andrzej Pronobis; Kristoffer Sjöö; Alen Vrečko; Hendrik Zender; Michael Zillich; Danijel Skočaj

There are many different approaches to building a system that can engage in autonomous mental development. In this paper, we present an approach based on what we term self-understanding, by which we mean the explicit representation of and reasoning about what a system does and does not know, and how that knowledge changes under action. We present an architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, which we term self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a goal management and planning system for setting and achieving learning goals.
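
Purely as an illustration of "representing what the system does not know", the sketch below marks missing feature values as explicit gaps and turns each gap into a candidate learning goal. The belief structure and goal strings are hypothetical, not the representations used in the paper's robot systems.

```python
# Illustrative sketch: explicit knowledge gaps become learning goals.
beliefs = {
    "obj-1": {"shape": "box", "colour": None},   # colour unknown: a gap
    "obj-2": {"shape": None,  "colour": "red"},  # shape unknown: a gap
}

def learning_goals(beliefs):
    goals = []
    for obj, features in beliefs.items():
        for feature, value in features.items():
            if value is None:
                goals.append(f"learn {feature} of {obj}")
    return goals

for goal in learning_goals(beliefs):
    print(goal)
```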


Systems, Man and Cybernetics | 2004

A modified approach to fuzzy Q learning for mobile robots

Panrasee Ritthipravat; Thavida Maneewarn; Djitt Laowattana; Jeremy L. Wyatt

A modified approach to fuzzy Q-learning is presented in this paper. A reward-sharing mechanism is added to increase the learning speed and to allow each fuzzy rule to be treated as a separate learning node. A new exploration method is also proposed to improve learning performance. Two basic robot behaviours, goal-seeking and obstacle avoidance, are simulated to demonstrate the promise of the proposed techniques. The goal-seeking behaviour is also implemented on a real robot. The experimental results show that the method is practical for real-world problems.
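
A compact sketch of the reward-sharing idea follows: each fuzzy rule keeps its own Q-values and receives a share of the reward proportional to how strongly it fired. The update rule, learning rates, and dimensions are assumptions for illustration, not the authors' exact formulation.

```python
# Sketch of fuzzy Q-learning with activation-weighted reward sharing.
import numpy as np

n_rules, n_actions = 4, 3
Q = np.zeros((n_rules, n_actions))
alpha, gamma = 0.1, 0.9

def step(firing, actions, reward, next_firing):
    """One update; each rule is a separate learning node."""
    share = firing / firing.sum()                         # reward shares
    next_value = (next_firing / next_firing.sum()) @ Q.max(axis=1)
    for r in range(n_rules):
        td = share[r] * reward + gamma * next_value - Q[r, actions[r]]
        Q[r, actions[r]] += alpha * firing[r] * td

firing = np.array([0.6, 0.3, 0.1, 0.0])                   # rule activations
step(firing, actions=[0, 1, 0, 2], reward=1.0, next_firing=firing)
print(Q)
```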


The International Journal of Robotics Research | 2016

One-shot learning and generation of dexterous grasps for novel objects

Marek Sewer Kopicki; Renaud Detry; Maxime Adjigble; Rustam Stolkin; Aleš Leonardis; Jeremy L. Wyatt

This paper presents a method for one-shot learning of dexterous grasps and grasp generation for novel objects. A model of each grasp type is learned from a single kinesthetic demonstration and several types are taught. These models are used to select and generate grasps for unfamiliar objects. Both the learning and generation stages use an incomplete point cloud from a depth camera, so no prior model of an object shape is used. The learned model is a product of experts, in which experts are of two types. The first type is a contact model and is a density over the pose of a single hand link relative to the local object surface. The second type is the hand-configuration model and is a density over the whole-hand configuration. Grasp generation for an unfamiliar object optimizes the product of these two model types, generating thousands of grasp candidates in under 30 seconds. The method is robust to incomplete data at both training and testing stages. When several grasp types are considered the method selects the highest-likelihood grasp across all the types. In an experiment, the training set consisted of five different grasps and the test set of 45 previously unseen objects. The success rate of the first-choice grasp is 84.4% or 77.7% if seven views or a single view of the test object are taken, respectively.
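
The scoring step can be sketched in one dimension (the paper works over full hand-link poses and whole-hand configurations): candidate grasps are scored by the product of a contact-model density and a hand-configuration density, and the highest-scoring candidate is kept. The kernel density estimator, the demonstration values, and the candidate sampler below are simplifying assumptions.

```python
# Sketch: rank grasp candidates by a product of two learned densities.
import numpy as np

def kde(samples, bandwidth):
    """A tiny Gaussian kernel density estimate from demonstration samples."""
    samples = np.asarray(samples, dtype=float)
    def density(x):
        return np.mean(np.exp(-0.5 * ((x - samples) / bandwidth) ** 2)) / (
            bandwidth * np.sqrt(2 * np.pi))
    return density

# Learned from a single demonstration (numbers are invented).
contact_model = kde([0.02, 0.025, 0.03], bandwidth=0.01)  # link-surface offset
config_model = kde([0.8, 0.85], bandwidth=0.05)           # finger flexion

rng = np.random.default_rng(1)
candidates = [(rng.uniform(0.0, 0.06), rng.uniform(0.5, 1.0))
              for _ in range(1000)]
best = max(candidates,
           key=lambda g: contact_model(g[0]) * config_model(g[1]))
print(f"best grasp: offset={best[0]:.3f} m, flexion={best[1]:.2f}")
```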

Collaboration


Dive into Jeremy L. Wyatt's collaborations.

Top Co-Authors

Nick Hawes (University of Birmingham)
Rustam Stolkin (University of Birmingham)
Aaron Sloman (University of Birmingham)
Michael Zillich (Vienna University of Technology)
Gavin Brown (University of Manchester)