Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marek Sewer Kopicki is active.

Publication


Featured research published by Marek Sewer Kopicki.


International Conference on Robotics and Automation (ICRA) | 2011

Learning to predict how rigid objects behave under simple manipulation

Marek Sewer Kopicki; Sebastian Zurek; Rustam Stolkin; Thomas Mörwald; Jeremy L. Wyatt

An important problem in robotic manipulation is the ability to predict how objects behave under manipulative actions. This ability is necessary to allow planning of object manipulations. Physics simulators can be used to do this, but they model many kinds of object interaction poorly. An alternative is to learn a motion model for objects by interacting with them. In this paper we address the problem of learning to predict the interactions of rigid bodies in a probabilistic framework, and demonstrate the results in the domain of robotic push manipulation. A robot arm applies random pushes to various objects and observes the resulting motion with a vision system. The relationship between push actions and object motions is learned, and enables the robot to predict the motions that will result from new pushes. The learning does not make explicit use of physics knowledge, or any pre-coded physical constraints, nor is it even restricted to domains which obey any particular rules of physics. We use regression to learn efficiently how to predict the gross motion of a particular object. We further show how different density functions can encode different kinds of information about the behaviour of interacting objects. By combining these as a product of densities, we show how learned predictors can cope with a degree of generalisation to previously unencountered object shapes, subjected to previously unencountered push directions. Performance is evaluated through a combination of virtual experiments in a physics simulator, and real experiments with a 5-axis arm equipped with a simple, rigid finger.
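As a minimal illustration of the action-conditioned regression idea described above, the sketch below fits a Nadaraya-Watson kernel regressor mapping push parameters to observed pose changes. The feature layout, the synthetic data in collect_push_data(), and the bandwidth are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch: learn a mapping from push actions to rigid-object
# motion with Nadaraya-Watson kernel regression (no physics knowledge).
# The features and data generation are hypothetical stand-ins for the
# paper's vision-observed push/motion pairs.

rng = np.random.default_rng(0)

def collect_push_data(n=200):
    """Random pushes: (contact offset, push angle) -> observed pose change."""
    contact = rng.uniform(-0.05, 0.05, size=(n, 1))   # metres along object edge
    angle = rng.uniform(-0.5, 0.5, size=(n, 1))       # radians, push direction
    X = np.hstack([contact, angle])
    # Unknown "true" behaviour the robot only observes, plus sensor noise.
    dx = 0.03 * np.cos(angle)
    dy = 0.03 * np.sin(angle) + 0.2 * contact
    dtheta = -4.0 * contact + 0.5 * angle
    Y = np.hstack([dx, dy, dtheta]) + rng.normal(0, 1e-3, size=(n, 3))
    return X, Y

def predict_motion(X_train, Y_train, x_query, bandwidth=0.05):
    """Nadaraya-Watson estimate of the pose change for a new push."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w[:, None] * Y_train).sum(axis=0) / w.sum()

X, Y = collect_push_data()
new_push = np.array([0.02, 0.1])          # contact offset 2 cm, slight angle
print("predicted (dx, dy, dtheta):", predict_motion(X, Y, new_push))
```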


The International Journal of Robotics Research | 2016

One-shot learning and generation of dexterous grasps for novel objects

Marek Sewer Kopicki; Renaud Detry; Maxime Adjigble; Rustam Stolkin; Aleš Leonardis; Jeremy L. Wyatt

This paper presents a method for one-shot learning of dexterous grasps and grasp generation for novel objects. A model of each grasp type is learned from a single kinesthetic demonstration, and several types are taught. These models are used to select and generate grasps for unfamiliar objects. Both the learning and generation stages use an incomplete point cloud from a depth camera, so no prior model of an object's shape is used. The learned model is a product of experts, in which the experts are of two types. The first type is a contact model: a density over the pose of a single hand link relative to the local object surface. The second type is the hand-configuration model: a density over the whole-hand configuration. Grasp generation for an unfamiliar object optimizes the product of these two model types, generating thousands of grasp candidates in under 30 seconds. The method is robust to incomplete data at both the training and testing stages. When several grasp types are considered, the method selects the highest-likelihood grasp across all the types. In an experiment, the training set consisted of five different grasps and the test set of 45 previously unseen objects. The success rate of the first-choice grasp is 84.4% when seven views of the test object are taken, or 77.7% from a single view.
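The product-of-experts scoring can be sketched as follows: grasp candidates are scored by the product of a contact-model density and a hand-configuration density, and the highest-scoring candidate is kept. The one-dimensional features and the jittered demonstration samples are hypothetical simplifications of the paper's pose-over-surface and whole-hand models.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative sketch of the product-of-experts idea: score each grasp
# candidate by contact-model density times hand-configuration density.

rng = np.random.default_rng(1)

# Densities "learned" from a single demonstration: samples jittered around
# the demonstrated values stand in for the paper's kernel density estimates.
demo_contact_feature = 0.3      # e.g. finger-link pose w.r.t. local surface
demo_hand_config = -0.1         # e.g. a synergy coordinate of the hand
contact_model = gaussian_kde(demo_contact_feature + rng.normal(0, 0.05, 50))
config_model = gaussian_kde(demo_hand_config + rng.normal(0, 0.05, 50))

# Generate many candidate grasps on the new object and score each one.
candidates = rng.uniform(-1, 1, size=(5000, 2))  # (contact feature, config)
scores = contact_model(candidates[:, 0]) * config_model(candidates[:, 1])
best = candidates[np.argmax(scores)]
print("best candidate (contact feature, hand config):", best)
```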


International Conference on Intelligent Robots and Systems (IROS) | 2012

Two-level RRT planning for robotic push manipulation

Claudio Zito; Rustam Stolkin; Marek Sewer Kopicki; Jeremy L. Wyatt

This paper presents an algorithm for planning sequences of pushes, by which a robotic arm equipped with a single rigid finger can move a manipulated object (or manipulandum) towards a desired goal pose. Pushing is perhaps the most basic kind of manipulation; however, it presents difficult challenges for planning because of the complex relationship between manipulative pushing actions and the resulting manipulandum motions. The motion planning literature has well-developed paradigms for solving, for example, the piano-movers problem, where the search occurs directly in the configuration space of the manipulandum being moved. In contrast, in pushing manipulation, a plan must be built in the action space of the robot, which is only indirectly linked to the motion space of the manipulandum through a complex interaction for which inverse models may not be known. In this paper, we present a two-stage approach to planning pushing operations. A global RRT path planner explores the space of possible manipulandum configurations, while a local push planner makes use of predictive models of pushing interactions to plan sequences of pushes that move the manipulandum from one RRT node to the next. The effectiveness of the algorithm is demonstrated in simulation experiments in which a robot must move a rigid body through complex 3D transformations by applying only a sequence of simple single-finger pushes.
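A compact sketch of the two-level structure, under simplifying assumptions: a global RRT explores manipulandum configurations (here just 2-D positions) while a local planner tries to realise each tree extension. The local_push_planner() below is a hypothetical stub with execution noise; the paper's version plans actual push sequences using learned predictive models.

```python
import numpy as np

# Illustrative two-level RRT: global tree over manipulandum configurations,
# local push planner (stubbed) realising each extension.

rng = np.random.default_rng(2)
GOAL = np.array([0.9, 0.9])
STEP = 0.05

def local_push_planner(q_from, q_to):
    """Hypothetical local planner: returns the pose actually reached after
    planning pushes from q_from towards q_to (with execution noise)."""
    reached = q_to + rng.normal(0, 0.005, size=2)
    return reached  # a real planner would predict/plan finger pushes

nodes = [np.zeros(2)]
parents = {0: None}                 # tree bookkeeping for path extraction
for _ in range(2000):
    sample = GOAL if rng.random() < 0.1 else rng.uniform(0, 1, size=2)
    nearest = min(range(len(nodes)),
                  key=lambda i: np.linalg.norm(nodes[i] - sample))
    direction = sample - nodes[nearest]
    target = nodes[nearest] + STEP * direction / (np.linalg.norm(direction) + 1e-9)
    reached = local_push_planner(nodes[nearest], target)
    parents[len(nodes)] = nearest
    nodes.append(reached)
    if np.linalg.norm(reached - GOAL) < 0.05:
        print("goal reached after", len(nodes), "nodes")
        break
```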


International Conference on Robotics and Automation (ICRA) | 2014

Learning dexterous grasps that generalise to novel objects by combining hand and contact models

Marek Sewer Kopicki; Renaud Detry; Florian Schmidt; Christoph Borst; Rustam Stolkin; Jeremy L. Wyatt

Generalising dexterous grasps to novel objects is an open problem. We show how to learn grasps for high-DoF hands that generalise to novel objects, given as little as one demonstrated grasp. During grasp learning, two types of probability density are learned that model the demonstrated grasp. The first density type (the contact model) models the relationship of an individual finger part to local surface features at its contact point. The second density type (the hand configuration model) models the whole-hand configuration during the approach to grasp. When presented with a new object, many candidate grasps are generated, and a kinematically feasible grasp is selected that maximises the product of these densities. We demonstrate 31 successful grasps on novel objects (an 86% success rate), transferred from 16 training grasps. The method enables transfer of dexterous grasps within object categories, across object categories, and to and from objects for which no complete object model is available, using two different dexterous hands.
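A rough sketch of how such a contact model might be built from a single demonstration: a kernel density over a finger link's relation to local surface features, then evaluated on candidate contacts of a novel object. The two-dimensional feature (normal distance, curvature proxy) and the synthetic demonstration are assumed simplifications of the paper's representation.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Minimal sketch: contact model as a density over local surface features
# observed at the demonstrated contact, reused to rank novel contacts.

rng = np.random.default_rng(3)

# Hypothetical demonstration: for points near the finger-link contact,
# record (distance along surface normal, local curvature estimate).
demo_features = np.column_stack([
    rng.normal(0.01, 0.002, 100),   # link sits ~1 cm along the normal
    rng.normal(0.5, 0.1, 100),      # contacts favour curvature ~0.5
])
contact_model = gaussian_kde(demo_features.T)

# On a novel object, evaluate candidate contacts under the learned density.
candidates = np.column_stack([rng.uniform(0, 0.05, 1000),
                              rng.uniform(0, 1.0, 1000)])
likelihood = contact_model(candidates.T)
print("most demonstration-like contact:", candidates[np.argmax(likelihood)])
```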


International Conference on Intelligent Robots and Systems (IROS) | 2013

Sequential trajectory re-planning with tactile information gain for dexterous grasping under object-pose uncertainty

Claudio Zito; Marek Sewer Kopicki; Rustam Stolkin; Christoph Borst; Florian Schmidt; Maximo A. Roa; Jeremy L. Wyatt

Dexterous grasping of objects with uncertain pose is a hard unsolved problem in robotics. This paper solves this problem using information-gain re-planning. First, we show how tactile information, acquired during a failed attempt to grasp an object, can be used to refine the estimate of that object's pose. Second, we show how this information can be used to re-plan new reach-to-grasp trajectories for successive grasp attempts. Finally, we show how reach-to-grasp trajectories can be modified so that they maximise the expected tactile information gain, while simultaneously delivering the hand to the grasp configuration that is most likely to succeed. Our main novel outcome is thus to enable tactile information-gain planning for dexterous, high degree-of-freedom (DoF) manipulators. We achieve this using a combination of information-gain planning, hierarchical probabilistic roadmap planning, and belief updating from tactile sensors for objects with non-Gaussian pose uncertainty in 6 dimensions. The method is demonstrated in trials with simulated robots. Sequential re-planning is shown to achieve a greater success rate than single grasp attempts, and trajectories that maximise information gain require fewer re-planning iterations than conventional planning methods before a grasp is achieved.
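The tactile belief update can be sketched as particle reweighting: particles consistent with the felt contact gain weight, and the entropy drop measures the information gained from that contact. The one-dimensional pose and the Gaussian sensor model are hypothetical stand-ins for the paper's 6-D non-Gaussian belief.

```python
import numpy as np

# Illustrative particle-based belief update from a tactile contact, with the
# entropy reduction used as the realised information gain.

rng = np.random.default_rng(4)

particles = rng.normal(0.0, 0.02, 500)        # belief over object x-offset (m)
weights = np.ones_like(particles) / len(particles)

def entropy(w):
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

def tactile_update(w, contact_at, sigma=0.005):
    """Reweight: favour particles consistent with contact felt at `contact_at`."""
    lik = np.exp(-(particles - contact_at) ** 2 / (2 * sigma ** 2))
    w = w * lik
    return w / w.sum()

h_before = entropy(weights)
weights = tactile_update(weights, contact_at=0.015)   # failed grasp touched here
print("information gain from contact:", h_before - entropy(weights))
```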


International Conference on Robotics and Automation (ICRA) | 2011

Predicting the unobservable: Visual 3D tracking with a probabilistic motion model

Thomas Mörwald; Marek Sewer Kopicki; Rustam Stolkin; Jeremy L. Wyatt; Sebastian Zurek; Michael Zillich; Markus Vincze

Visual tracking of an object can provide a powerful source of feedback information during complex robotic manipulation operations, especially those in which there may be uncertainty about which new object pose may result from a planned manipulative action. At the same time, robotic manipulation can provide a challenging environment for visual tracking, with occlusions of the object by other objects or by the robot itself, and sudden changes in object pose that may be accompanied by motion blur. Recursive filtering techniques use motion models for predictor-corrector tracking, but the simple models typically used often fail to adequately predict the complex motions of manipulated objects. We show how statistical machine learning techniques can be used to train sophisticated motion predictors, which incorporate additional information by being conditioned on the planned manipulative action being executed. We then show how these learned predictors can be used to propagate the particles of a particle filter from one predictor-corrector step to the next, enabling a visual tracking algorithm to maintain plausible hypotheses about the location of an object, even during severe occlusion and other difficult conditions. We demonstrate the approach in the context of robotic push manipulation, where a 5-axis robot arm equipped with a rigid finger applies a series of pushes to an object, while it is tracked by a vision algorithm using a single camera.
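In outline, the tracker's predictor-corrector loop looks like the sketch below, with a learned, action-conditioned predictor replacing a generic motion model for particle propagation. Both learned_predictor() and observation_likelihood() are hypothetical stubs, not the authors' trained models.

```python
import numpy as np

# Minimal particle-filter sketch in the spirit of the paper: particles are
# propagated by a learned, action-conditioned motion predictor rather than
# a constant-velocity model, then reweighted against the camera observation.

rng = np.random.default_rng(5)

def learned_predictor(poses, push_action):
    """Stub for a regressor trained on (pose, action) -> next pose."""
    return poses + push_action + rng.normal(0, 0.002, size=poses.shape)

def observation_likelihood(poses, observed_pose, sigma=0.01):
    """Stub for edge/texture matching against the camera image."""
    return np.exp(-np.sum((poses - observed_pose) ** 2, axis=1)
                  / (2 * sigma ** 2))

particles = rng.normal(0, 0.01, size=(300, 2))       # (x, y) hypotheses
action = np.array([0.02, 0.0])                       # planned push this frame
observed = np.array([0.021, 0.001])                  # noisy visual detection

particles = learned_predictor(particles, action)     # predictor step
w = observation_likelihood(particles, observed)      # corrector step
w /= w.sum()
idx = rng.choice(len(particles), size=len(particles), p=w)  # resample
particles = particles[idx]
print("tracked pose estimate:", particles.mean(axis=0))
```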


Autonomous Robots | 2017

Learning modular and transferable forward models of the motions of push manipulated objects

Marek Sewer Kopicki; Sebastian Zurek; Rustam Stolkin; Thomas Mörwald; Jeremy L. Wyatt

The ability to predict how objects behave during manipulation is an important problem. Models informed by mechanics are powerful, but are hard to tune. An alternative is to learn a model of the object's motion from data: to learn to predict. We study this for push manipulation. The paper starts by formulating a quasi-static prediction problem. We then pose the problem of learning to predict in two different frameworks: (i) regression and (ii) density estimation. Our architecture is modular: many simple, object-specific, and context-specific predictors are learned. We show empirically that such predictors outperform a rigid-body dynamics engine tuned on the same data. We then extend the density estimation approach using a product of experts. This allows transfer of learned motion models to objects of novel shape, and to novel actions. With the right representation and learning method, these transferred models can match the prediction performance of a rigid-body dynamics engine for novel objects or actions.
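If each modular expert outputs a Gaussian over the object's next pose, the product-of-experts combination has a closed form, as the short sketch below shows. The two experts' means and variances are invented for illustration; the paper's experts are learned, context-specific densities.

```python
import numpy as np

# Illustrative product-of-experts fusion: the combined prediction is the
# (renormalised) product of each expert's Gaussian density.

def product_of_gaussians(means, variances):
    """Closed-form product of 1-D Gaussian densities."""
    precisions = 1.0 / np.asarray(variances)
    var = 1.0 / precisions.sum()
    mean = var * np.sum(precisions * np.asarray(means))
    return mean, var

# Expert 1: global object-motion model; Expert 2: local contact model.
mean, var = product_of_gaussians(means=[0.10, 0.08], variances=[0.01, 0.002])
print(f"fused prediction: {mean:.4f} m (variance {var:.5f})")
```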


International Conference on Intelligent Robots and Systems (IROS) | 2016

Task-relevant grasp selection: A joint solution to planning grasps and manipulative motion trajectories

M E Amir Ghalamzan; Nikos Mavrakis; Marek Sewer Kopicki; Rustam Stolkin; Aleš Leonardis

This paper addresses the problem of jointly planning both grasps and subsequent manipulative actions. Previously, these two problems have typically been studied in isolation; however, joint reasoning is essential to enable robots to complete real manipulative tasks. In this paper, the two problems are addressed jointly and a solution that takes both into consideration is proposed. To do so, a manipulation capability index is defined, which is a function of both the task execution waypoints and the object grasping contact points. We build on recent state-of-the-art grasp-learning methods to show how this index can be combined with a likelihood function computed by a probabilistic model of grasp selection, enabling the planning of grasps which have a high likelihood of being stable, but which also maximise the robot's capability to deliver a desired post-grasp task trajectory. We also show how this paradigm can be extended from a single arm and hand to enable efficient grasping and manipulation with a bi-manual robot. We demonstrate the effectiveness of the approach using experiments on a simulated as well as a real robot.
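The joint criterion can be sketched as a simple argmax over candidate grasps of grasp likelihood times a capability index aggregated over the task waypoints. Both terms below are randomly generated stand-ins for the paper's learned grasp model and manipulation capability index.

```python
import numpy as np

# Minimal sketch of joint grasp/trajectory selection: choose the grasp that
# maximises (grasp likelihood) x (capability along the post-grasp task).

rng = np.random.default_rng(6)

n_grasps, n_waypoints = 50, 10
grasp_likelihood = rng.uniform(0.1, 1.0, n_grasps)        # from grasp model

# Assumed capability index: e.g. a manipulability measure evaluated at each
# task waypoint for each candidate grasp, aggregated by its worst case.
capability_per_waypoint = rng.uniform(0.0, 1.0, (n_grasps, n_waypoints))
capability = capability_per_waypoint.min(axis=1)

score = grasp_likelihood * capability
print("selected grasp index:", int(np.argmax(score)))
```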


International Conference on Intelligent Robots and Systems (IROS) | 2014

Kinematically optimised predictions of object motion

Dominik Belter; Marek Sewer Kopicki; Sebastian Zurek; Jeremy L. Wyatt

Predicting the motions of rigid objects under contacts is a necessary precursor to planning of robot manipulation of objects. On the one hand, physics-based rigid-body simulations are used; on the other, learning approaches are being developed. The advantage of physics simulations is that, because they explicitly perform collision checking, they respect kinematic constraints, producing physically plausible predictions. The advantage of learning approaches is that they can capture the effects on motion of unobservable parameters such as mass distribution and friction coefficients, thus producing more accurate predicted trajectories. This paper shows how to bring together the advantages of both approaches to achieve learned simulators of specific objects that outperform previous learning approaches. Our approach employs a fast simplified collision checker and a learning method. The learner predicts trajectories for the object. These are optimised post-prediction to minimise interpenetrations according to the collision checker. In addition, we show that cleaning the training data prior to learning can also improve performance. Combining both approaches results in consistently strong prediction performance. The new simulator outperforms previous learning-based approaches on a single-contact push manipulation prediction task. We also present results showing that the method works for multi-contact manipulation, for which rigid-body simulators are notoriously unstable.
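The predict-then-optimise step might look like the following: a learned predictor proposes the next pose, and the pose is shifted by the smallest correction that removes the interpenetration reported by a collision checker. The disc-versus-wall geometry is a deliberately trivial toy assumption standing in for the paper's simplified collision checker.

```python
import numpy as np

# Illustrative predict-then-optimise step: correct a learned predictor's
# raw pose output so the (trivial) collision checker reports no overlap.

RADIUS = 0.05          # disc-shaped object
WALL_X = 0.50          # vertical wall at x = 0.5, free space is x < 0.5

def penetration_depth(pose):
    """Trivial collision check: how far the disc penetrates the wall."""
    return max(0.0, pose[0] + RADIUS - WALL_X)

def kinematic_correction(pose):
    """Shift the predicted pose by the smallest amount removing overlap."""
    corrected = pose.copy()
    corrected[0] -= penetration_depth(pose)
    return corrected

predicted = np.array([0.48, 0.10])          # learned predictor's raw output
print("penetration before:", penetration_depth(predicted))
print("corrected pose:", kinematic_correction(predicted))
```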


International Conference on Robotics and Automation (ICRA) | 2016

Towards advanced robotic manipulation for nuclear decommissioning: A pilot study on tele-operation and autonomy

Naresh Marturi; Alireza Rastegarpanah; Chie Takahashi; Maxime Adjigble; Rustam Stolkin; Sebastian Zurek; Marek Sewer Kopicki; Mohammed Talha; Jeffrey A. Kuo; Yasemin Bekiroglu

We present early pilot studies of a new international project, developing advanced robotics to handle nuclear waste. Despite enormous remote handling requirements, there has been remarkably little use of robots by the nuclear industry. The few robots deployed have been directly tele-operated in rudimentary ways, with no advanced control methods or autonomy. Most remote handling is still done by an aging workforce of highly skilled experts, using 1960s-style mechanical master-slave devices. In contrast, this paper explores how novice human operators can rapidly learn to control modern robots to perform basic manipulation tasks, and how autonomous robotics techniques can be used for operator assistance, to increase throughput rates, decrease errors, and enhance safety. We compare humans directly tele-operating a robot arm against human-supervised semi-autonomous control exploiting computer vision, visual servoing, and autonomous grasping algorithms. We show how novice operators rapidly improve their performance with training; suggest how training needs might scale with task complexity; and demonstrate how advanced autonomous robotics techniques can help human operators improve their overall task performance. An additional contribution of this paper is to show how rigorous experimental and analytical methods from human factors research can be applied to perform principled scientific evaluations of human test subjects controlling robots to perform practical manipulative tasks.

Collaboration


Dive into Marek Sewer Kopicki's collaboration.

Top Co-Authors

Rustam Stolkin (University of Birmingham)
Claudio Zito (University of Birmingham)
Aaron Sloman (University of Birmingham)
Ermano Arruda (University of Birmingham)