Vladimir Ivan
University of Edinburgh
Publications
Featured research published by Vladimir Ivan.
intelligent robots and systems | 2014
Karl Pauwels; Vladimir Ivan; Eduardo Ros; Sethu Vijayakumar
We introduce a real-time system for recognizing and tracking the position and orientation of a large number of complex real-world objects, together with an articulated robotic manipulator operating upon them. The proposed system is fast, accurate and reliable, and yet does not require precise camera calibration. The key to this high level of performance is a continuously-refined internal 3D representation of all the relevant scene elements. Occlusions are handled implicitly in this approach and a soft-constraint mechanism is used to obtain the highest precision at a specific region-of-interest. The system is well-suited for implementation on Graphics Processing Units and, thanks to a tight integration of the latter's graphical and computational capabilities, scene updates can be obtained at frame rates exceeding 40 Hz. We demonstrate the robustness and accuracy of this system on a complex real-world manipulation task involving active endpoint closed-loop visual servo control in the presence of both camera and target object motion.
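As a rough illustration of the soft-constraint idea, the sketch below (plain NumPy, not the authors' GPU pipeline; all names and numbers are assumptions) fits a rigid pose by weighted least squares, giving points inside a hypothetical region of interest a larger weight so the estimate is most precise there.

```python
# Minimal sketch: weighted rigid alignment where ROI points count more.
# Not the paper's implementation; weights and data are illustrative assumptions.
import numpy as np

def weighted_rigid_fit(model_pts, observed_pts, weights):
    """Weighted Kabsch: find R, t minimising sum_i w_i ||R p_i + t - q_i||^2."""
    w = weights / weights.sum()
    mu_p = (w[:, None] * model_pts).sum(axis=0)
    mu_q = (w[:, None] * observed_pts).sum(axis=0)
    P = model_pts - mu_p
    Q = observed_pts - mu_q
    H = (w[:, None] * P).T @ Q                      # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t

# Toy usage: points near a hypothetical end-effector ROI get 10x weight.
rng = np.random.default_rng(0)
model = rng.normal(size=(200, 3))
obs = model + np.array([0.1, -0.2, 0.05]) + 0.01 * rng.normal(size=model.shape)
roi = np.linalg.norm(model - model[0], axis=1) < 0.5
R, t = weighted_rigid_fit(model, obs, np.where(roi, 10.0, 1.0))
```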
robotics: science and systems | 2012
Dmitry Zarubin; Vladimir Ivan; Marc Toussaint; Taku Komura; Sethu Vijayakumar
Motion can be described in alternative representations, including joint configuration or end-effector spaces, but also more complex topological representations that imply a change of Voronoi bias, metric or topology of the motion space. Certain types of robot interaction problems, e.g. wrapping around an object, can suitably be described by so-called writhe and interaction mesh representations. However, considering motion synthesis solely in topological spaces is insufficient since it does not cater for additional tasks and constraints in other representations. In this paper we propose methods to combine and exploit different representations for motion synthesis, with specific emphasis on generalization of motion to novel situations. Our approach is formulated in the framework of optimal control as an approximate inference problem, which allows for a direct extension of the graphical model to incorporate multiple representations. Motion generalization is similarly performed by projecting motion from topological to joint configuration space. We demonstrate the benefits of our methods on problems where direct path finding in joint configuration space is extremely hard whereas local optimal control exploiting a representation with different topology can efficiently find optimal trajectories. Further, we illustrate the successful online motion generalization to dynamic environments on challenging, real world problems.
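The sketch below is a heavily simplified illustration, not the paper's approximate-inference formulation: it combines cost terms defined in different representations (joint-space smoothness, an end-effector goal, and a toy "topological" winding-angle feature) into a single trajectory objective for an assumed planar 2-link arm and hands it to a generic optimizer.

```python
# Assumed toy setup: a planar 2-link arm and hand-picked weights, for
# illustration only; the paper uses a graphical-model / AICO formulation.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 1.0                      # link lengths of the toy arm

def fk(q):
    """End-effector position of a 2-link planar arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def winding(q, centre):
    """Toy topological feature: angle of the end-effector around a centre point."""
    p = fk(q) - centre
    return np.arctan2(p[1], p[0])

def trajectory_cost(flat_q, T, ee_goal, wind_goal, centre):
    Q = flat_q.reshape(T, 2)
    cost = 0.0
    for t in range(1, T):
        cost += 1e-2 * np.sum((Q[t] - Q[t - 1]) ** 2)          # joint-space smoothness
    cost += 10.0 * np.sum((fk(Q[-1]) - ee_goal) ** 2)           # end-effector task
    cost += 5.0 * (winding(Q[-1], centre) - wind_goal) ** 2     # topological task
    return cost

T = 10
q0 = np.tile(np.array([0.3, 0.3]), (T, 1)).ravel()
res = minimize(trajectory_cost, q0,
               args=(T, np.array([0.5, 1.2]), np.pi / 2, np.array([0.5, 0.5])),
               method="L-BFGS-B")
Q_opt = res.x.reshape(T, 2)
```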
The International Journal of Robotics Research | 2013
Vladimir Ivan; Dmitry Zarubin; Marc Toussaint; Taku Komura; Sethu Vijayakumar
Motion can be described in several alternative representations, including joint configuration or end-effector spaces, but also more complex topology-based representations that imply a change of Voronoi bias, metric or topology of the motion space. Certain types of robot interaction problems, e.g. wrapping around an object, can suitably be described by so-called writhe and interaction mesh representations. However, considering motion synthesis solely in a topology-based space is insufficient since it does not account for additional tasks and constraints in other representations. In this paper, we propose methods to combine and exploit different representations for synthesis and generalization of motion in dynamic environments. Our motion synthesis approach is formulated in the framework of optimal control as an approximate inference problem. This allows for consistent combination of multiple representations (e.g. across task, end-effector and joint space). Motion generalization to novel situations and kinematics is similarly performed by projecting motion from topology-based to joint configuration space. We demonstrate the benefit of our methods on problems where direct path finding in joint configuration space is extremely hard whereas local optimal control exploiting a representation with different topology can efficiently find optimal trajectories. In real-world demonstrations, we highlight the benefits of using topology-based representations for online motion generalization in dynamic environments.
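As a toy illustration of generalization by projection (an assumption-laden sketch, not the paper's method), the code below records a motion as relative distances from the end-effector to two landmark points and maps it back to joint space for a robot with different link lengths by solving a small least-squares problem per time step.

```python
# All geometry (landmarks, link lengths, source trajectory) is invented for
# this example; only the projection pattern is what the abstract describes.
import numpy as np
from scipy.optimize import least_squares

LANDMARKS = np.array([[1.2, 0.4], [0.2, 1.3]])   # assumed environment points

def fk(q, links):
    """End-effector of a planar 2-link arm with given link lengths."""
    return np.array([links[0] * np.cos(q[0]) + links[1] * np.cos(q[0] + q[1]),
                     links[0] * np.sin(q[0]) + links[1] * np.sin(q[0] + q[1])])

def features(q, links):
    """Relative features: distances from the end-effector to the landmarks."""
    return np.linalg.norm(LANDMARKS - fk(q, links), axis=1)

# Record features on a "source" robot, then project onto a "target" robot.
src_links, tgt_links = (1.0, 1.0), (0.8, 1.3)
src_traj = [np.array([0.2 + 0.05 * t, 0.4 + 0.03 * t]) for t in range(10)]
feature_traj = [features(q, src_links) for q in src_traj]

q = np.array([0.3, 0.3])                          # warm start for the target robot
tgt_traj = []
for f_target in feature_traj:
    sol = least_squares(lambda q_: features(q_, tgt_links) - f_target, q)
    q = sol.x                                     # warm-start the next step
    tgt_traj.append(q)
```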
Sensors | 2013
Angel Llamazares; Vladimir Ivan; Eduardo J. Molinos; Manuel Ocaña; Sethu Vijayakumar
The goal of this paper is to solve the problem of dynamic obstacle avoidance for a mobile platform by using the stochastic optimal control framework to compute paths that are optimal in terms of safety and energy efficiency under constraints. We propose a three-dimensional extension of the Bayesian Occupancy Filter (BOF) (Coué et al. Int. J. Rob. Res. 2006, 25, 19–30) to deal with the noise in the sensor data, improving the perception stage. We reduce the computational cost of the perception stage by estimating the velocity of each obstacle using optical flow tracking and blob filtering. While several obstacle avoidance systems have been presented in the literature addressing safety and optimality of the robot motion separately, we apply the approximate inference framework to this problem to combine multiple goals, constraints and priors in a structured way. It is important to remark that the problem involves obstacles that may be moving, so classical techniques based on reactive control are not optimal in terms of energy consumption. Experimental results are presented, including comparisons against classical algorithms that highlight these advantages.
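A stripped-down sketch of the Bayesian occupancy idea behind the perception stage is shown below. It implements only a static log-odds occupancy update, whereas the paper's three-dimensional BOF additionally tracks per-cell velocity; the sensor-model numbers are assumptions.

```python
# Simplified occupancy grid with a log-odds Bayesian update.
# Inverse sensor model values below are assumed for illustration.
import numpy as np

LOG_ODDS_HIT = np.log(0.7 / 0.3)     # assumed p(occupied | cell observed as hit)  = 0.7
LOG_ODDS_MISS = np.log(0.4 / 0.6)    # assumed p(occupied | cell observed as free) = 0.4

class OccupancyGrid:
    def __init__(self, shape):
        self.log_odds = np.zeros(shape)          # 0 -> p = 0.5 (unknown)

    def update(self, hits, misses):
        """hits/misses are boolean masks of cells observed occupied/free."""
        self.log_odds[hits] += LOG_ODDS_HIT
        self.log_odds[misses] += LOG_ODDS_MISS
        np.clip(self.log_odds, -10.0, 10.0, out=self.log_odds)

    def probability(self):
        return 1.0 / (1.0 + np.exp(-self.log_odds))

# Toy usage: one cell repeatedly seen occupied, another repeatedly seen free.
grid = OccupancyGrid((20, 20))
hits = np.zeros((20, 20), dtype=bool); hits[10, 10] = True
misses = np.zeros((20, 20), dtype=bool); misses[5, 5] = True
for _ in range(5):
    grid.update(hits, misses)
print(grid.probability()[10, 10], grid.probability()[5, 5])
```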
robotics and biomimetics | 2016
Yiming Yang; Vladimir Ivan; Wolfgang Merkt; Sethu Vijayakumar
Planning balanced and collision-free motion for humanoid robots is non-trivial, especially when they operate in complex environments, such as reaching targets behind obstacles or through narrow passages. Prior research has addressed planning such complex motion on humanoids; however, these approaches are typically restricted to particular robot platforms and environments and cannot easily be replicated or applied elsewhere. We propose a method that allows us to apply existing sampling-based algorithms directly to plan trajectories for humanoids by utilizing a customized state space representation, biased sampling strategies, and a steering function based on a robust inverse kinematics solver. Our approach requires no prior offline computation, so the work can easily be transferred to new robot platforms. We tested the proposed method by solving practical reaching tasks on a 38 degrees-of-freedom humanoid robot, NASA Valkyrie, showing that our method is able to generate valid motion plans that can be executed on advanced full-size humanoid robots.
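The pattern described above can be sketched with a toy RRT whose steering step projects every new state onto a constraint manifold, standing in for the IK-based projection onto balanced, collision-free configurations. The 2D state space, circular "constraint" and obstacle below are invented for illustration.

```python
# Toy constrained-sampling sketch, not the paper's humanoid planner.
import numpy as np

rng = np.random.default_rng(1)

def project_to_constraint(x):
    """Stand-in for the IK-based steering: snap the state onto a unit circle."""
    return x / np.linalg.norm(x)

def collision_free(x):
    return np.linalg.norm(x - np.array([0.0, 1.0])) > 0.3   # assumed obstacle

def rrt(start, goal, iters=2000, step=0.15, goal_bias=0.1):
    nodes, parents = [project_to_constraint(start)], [None]
    for _ in range(iters):
        target = goal if rng.random() < goal_bias else rng.uniform(-1.5, 1.5, 2)
        i = int(np.argmin([np.linalg.norm(n - target) for n in nodes]))
        direction = target - nodes[i]
        new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-9)
        new = project_to_constraint(new)            # keep the sample feasible
        if not collision_free(new):
            continue
        nodes.append(new); parents.append(i)
        if np.linalg.norm(new - goal) < step:
            break                                   # goal reached
    return nodes, parents

nodes, parents = rrt(np.array([1.0, 0.0]),
                     project_to_constraint(np.array([-1.0, 0.1])))
```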
ieee-ras international conference on humanoid robots | 2016
Yiming Yang; Vladimir Ivan; Zhibin Li; Maurice Fallon; Sethu Vijayakumar
In this paper, we propose a novel inverse Dynamic Reachability Map (iDRM) that allows a floating base system to find valid and sufficient end-poses in complex and changing environments in real-time. End-pose planning, i.e. finding valid stance locations and collision-free reaching configurations, is an essential problem in humanoid applications, such as providing goal states for walking and motion planners. However, it is non-trivial in complex environments, where standing locations and reaching postures are restricted by obstacles. Our proposed approach, iDRM, customizes the robot-to-workspace occupation list and uses an online update algorithm to enable efficient reconstruction of the reachability map to guarantee that the selected end-poses are always collision-free. The iDRM was evaluated in a variety of reaching tasks using the 38 degree-of-freedom (DoF) humanoid robot Valkyrie. Our results show that the approach is capable of finding valid end-poses in a fraction of a second.
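A schematic sketch of the occupation-list bookkeeping is given below. The data structures and sample counts are invented and do not reflect the iDRM implementation, but they show how colliding workspace voxels can invalidate stored whole-body samples online.

```python
# Invented offline map: sample i reaches target_voxel[i] from base_offset[i]
# while its body occupies the voxels in occupied_voxels[i].
import numpy as np

rng = np.random.default_rng(2)
N = 1000
base_offset = rng.uniform(-1.0, 1.0, size=(N, 2))
target_voxel = [tuple(v) for v in rng.integers(0, 10, size=(N, 3))]
occupied_voxels = [set(map(tuple, rng.integers(0, 10, size=(5, 3)))) for _ in range(N)]

# Occupation list: voxel -> indices of samples whose body passes through it.
occupation = {}
for i, vox_set in enumerate(occupied_voxels):
    for v in vox_set:
        occupation.setdefault(v, []).append(i)

def end_pose_candidates(target, obstacle_voxels):
    """Return base offsets of samples that reach `target` and stay collision-free."""
    invalid = set()
    for v in obstacle_voxels:                    # online update: flag colliding samples
        invalid.update(occupation.get(v, []))
    return [base_offset[i] for i in range(N)
            if target_voxel[i] == target and i not in invalid]

candidates = end_pose_candidates((3, 4, 5), obstacle_voxels={(1, 1, 1), (3, 4, 5)})
```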
international conference on advanced robotics | 2015
Yiming Yang; Vladimir Ivan; Sethu Vijayakumar
Reacting to environment changes is a major challenge for real-world robot applications. This paper presents a novel approach that allows the robot to quickly adapt to changes, particularly in the presence of moving targets and dynamic obstacles. Typically, replanning or adaptation in configuration space is required when the environment changes. Instead, our method aims to maintain a plan, in a relative distance space rather than in configuration space, that remains valid across different environments. In addition, we introduce an incremental planning structure that allows us to handle unexpected obstacles that may appear during execution. The main contribution is that the relative distance space representation encodes pose re-targeting, reaching and avoiding tasks within one unified cost term that can be solved in real-time to achieve a fast implementation for high degree-of-freedom (DOF) robots. We evaluate our method on a 7-DOF LWR robot arm and a 14-DOF dual-arm Baxter robot.
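The unified cost term can be pictured with a toy example (assumed planar 2-link arm and hand-picked weights, not the paper's real-time solver): one quadratic pulls the end-effector's distance to the target towards zero, another pushes its distance to an obstacle above a safety margin, and the sum is re-minimised as the environment changes.

```python
# Toy relative-distance cost, re-optimised online as target and obstacle move.
import numpy as np
from scipy.optimize import minimize

def fk(q):
    """End-effector of a toy planar 2-link arm with unit links."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def relative_distance_cost(q, target, obstacle, margin=0.4):
    ee = fk(q)
    d_target = np.linalg.norm(ee - target)            # reaching term
    d_obstacle = np.linalg.norm(ee - obstacle)        # avoidance term
    return d_target ** 2 + 5.0 * max(0.0, margin - d_obstacle) ** 2

q = np.array([0.5, 0.5])
for step in range(20):                                # environment changes online
    target = np.array([1.2, 0.8 + 0.02 * step])       # moving target
    obstacle = np.array([0.8 - 0.02 * step, 0.6])     # moving obstacle
    q = minimize(relative_distance_cost, q, args=(target, obstacle)).x
```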
ieee-ras international conference on humanoid robots | 2013
Peter Sandilands; Vladimir Ivan; Taku Komura; Sethu Vijayakumar
We propose a novel approach to transfer reach and grasp movements while being agnostic and invariant to finger kinematics, hand configurations and relative changes in object dimensions. We exploit a novel representation based on electrostatics to parametrise the salient aspects of the demonstrated grasp. By working in this alternate space that focuses on the relational aspects of the grasp rather than absolute kinematics, we are able to use inference based planning techniques to couple the motion in abstract spaces with trajectories in the configuration space of the robot. We demonstrate that our method computes stable grasps that generalise over objects of different shapes and robots of dissimilar kinematics while retaining the qualitative grasp type - all without expensive collision detection or re-optimisation.
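One way to picture an electrostatics-inspired grasp measure is sketched below; the geometry and units are assumptions and this is not the paper's model. Object sample points act as unit charges and the grasp is summarised by the approximate electric flux through small patches of the hand surface, so a hand that wraps around the object intercepts more flux regardless of its exact kinematics.

```python
# Rough flux approximation over assumed hand patches and object charge points.
import numpy as np

def flux_through_hand(object_points, patch_centres, patch_normals, patch_areas):
    """Sum over patches of (E . n) * area, with E from unit point charges."""
    total = 0.0
    for c, n, a in zip(patch_centres, patch_normals, patch_areas):
        E = np.zeros(3)
        for p in object_points:
            r = c - p
            E += r / (np.linalg.norm(r) ** 3 + 1e-9)   # field of a unit charge
        total += np.dot(E, n) * a
    return total

# Toy usage: a small spherical "object" inside a hemispherical "hand".
rng = np.random.default_rng(3)
obj = rng.normal(size=(50, 3)); obj /= np.linalg.norm(obj, axis=1, keepdims=True) * 5
dirs = rng.normal(size=(80, 3)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
dirs = dirs[dirs[:, 2] > 0]                 # keep the upper hemisphere only
centres = 0.5 * dirs
normals = -dirs                             # patches face the object
areas = np.full(len(centres), 0.01)
print(flux_through_hand(obj, centres, normals, areas))
```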
robotics: science and systems | 2017
Leopoldo Armesto; Vladimir Ivan; João Moura; Antonio Sala; Sethu Vijayakumar
Many practical tasks in robotic systems, such as cleaning windows, writing or grasping, are inherently constrained. Learning policies subject to constraints is a challenging problem. We propose a locally weighted constrained projection learning method (LWCPL) that first estimates the constraint and then exploits this estimate across multiple observations of the constrained motion to learn an unconstrained policy. Generalization is achieved by projecting the unconstrained policy onto a new, previously unseen, constraint. We do not require any prior knowledge about the task or the policy, so we can use generic regressors to model both. However, any prior beliefs about the structure of the motion can be expressed by choosing task-specific regressors; in particular, robot kinematics and motion priors can be used to improve accuracy. Our evaluation shows that LWCPL outperforms the state-of-the-art method in the accuracy of learning both the constraints and the unconstrained policy, even in noisy conditions. We validated our method by learning a wiping task from human demonstration on flat surfaces and reproducing it on an unknown curved surface using a force/torque based controller to achieve tool alignment. We show that, despite the differences between the training and validation scenarios, the learned policy still provides the desired wiping motion.
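The projection step can be illustrated with a small sketch in which the learned policy is replaced by a hand-written one and the constraints are planes (both assumptions): an unconstrained velocity policy is pushed through the null-space projector N = I - A+A, so the same policy reproduces constrained motion on a previously unseen surface simply by swapping the constraint matrix A.

```python
# Hand-written stand-in for a learned policy, projected onto assumed planar constraints.
import numpy as np

def pi(x):
    """Stand-in for the learned unconstrained policy: move towards a target."""
    target = np.array([1.0, 1.0, 0.0])
    return target - x

def null_space_projector(A):
    return np.eye(A.shape[1]) - np.linalg.pinv(A) @ A

def constrained_step(x, A, dt=0.05):
    """One step of the policy projected onto the constraint A @ xdot = 0."""
    return x + dt * null_space_projector(A) @ pi(x)

# Training-time constraint: motion restricted to the plane z = 0.
A_flat = np.array([[0.0, 0.0, 1.0]])
# Validation-time constraint: a different, previously unseen plane.
n = np.array([0.2, 0.1, 1.0]); A_new = (n / np.linalg.norm(n)).reshape(1, 3)

x = np.zeros(3)
for _ in range(100):
    x = constrained_step(x, A_new)      # same policy, new constraint
```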
international conference on robotics and automation | 2017
Yiming Yang; Wolfgang Merkt; Henrique Ferrolho; Vladimir Ivan; Sethu Vijayakumar
A key prerequisite for planning manipulation together with locomotion of humanoids in complex environments is to find a valid end-pose with a feasible stance location and a full-body configuration that is balanced and collision-free. Prior work based on the inverse dynamic reachability map assumed that the feet are placed next to each other around the stance location on a horizontal plane, and the success rate was correlated with the coverage density of the sampled space, which in turn is limited by the memory required for storing the map. In this letter, we present a framework that uses a paired forward-inverse dynamic reachability map to exploit the modularity of the robot's inherent kinematic structure. The combinatorics of this novel decomposition allows greater coverage of the high-dimensional configuration space while reducing the number of stored samples. This permits drawing samples from a much richer dataset to effectively plan end-poses for both single-handed and bimanual tasks on uneven terrain. This method was demonstrated on the 38-DoF NASA Valkyrie humanoid by exploiting whole-body redundancy to accomplish manipulation tasks on uneven terrain while avoiding obstacles.
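The pairing idea can be sketched as two hash maps keyed on a discretised pelvis pose: one from an upper-body inverse map and one from a lower-body forward map, with candidate end-poses formed combinatorially from matching keys. The data, dimensions and discretisation below are invented placeholders, not the paper's maps.

```python
# Schematic pairing of two reachability maps on a shared pelvis key.
# All samples are random placeholders standing in for precomputed reachable poses.
import numpy as np

rng = np.random.default_rng(4)

def pelvis_key(pose, res=0.1):
    """Discretise a pelvis pose (x, y, z, yaw) into a hashable grid key."""
    return tuple(np.round(np.asarray(pose) / res).astype(int))

# Upper-body inverse map: pelvis key -> arm configurations (assumed to reach a hand target).
upper = {}
for _ in range(5000):
    pelvis = rng.uniform(-1, 1, size=4)
    arm_q = rng.uniform(-np.pi, np.pi, size=7)
    upper.setdefault(pelvis_key(pelvis), []).append(arm_q)

# Lower-body forward map: pelvis key -> leg configurations with feasible foot placements.
lower = {}
for _ in range(5000):
    pelvis = rng.uniform(-1, 1, size=4)
    leg_q = rng.uniform(-np.pi, np.pi, size=12)
    lower.setdefault(pelvis_key(pelvis), []).append(leg_q)

# Combinatorial pairing: every shared pelvis key yields |upper| x |lower| end-poses.
candidates = [(arm, leg)
              for key in upper.keys() & lower.keys()
              for arm in upper[key]
              for leg in lower[key]]
print(len(candidates), "candidate whole-body end-poses")
```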