Johan Baeten
Katholieke Universiteit Leuven
Publications
Featured research published by Johan Baeten.
IEEE-ASME Transactions on Mechatronics | 2002
Johan Baeten; J. De Schutter
The accuracy and execution speed of a force controlled contour-following task are limited if the shape of the workpiece is unknown. This is even more true when the workpiece contour contains corners. The paper shows how a hybrid vision/force control approach at corners in planar-contour following results in a more accurate and faster task execution. The vision system is used to measure the contour online and to watch out for corners. The edge is correctly located by compensating for the compliance of the tool/camera setup, which affects the contour measurement. A simple corner-detection algorithm is presented. Once a corner is detected, the finite-state controller is activated to take the corner under the best conditions. Experimental results are presented to validate the approach.
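The abstract mentions a simple corner-detection algorithm applied to the visually measured contour. As an illustration only (not the authors' published method), a minimal sketch of one plausible approach: flag a corner where the tangent direction along the measured contour points changes by more than a threshold. All function names and the threshold value here are assumptions.

```python
import math

def tangent_angles(points):
    """Angle of each segment between successive contour points (radians)."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def detect_corner(points, threshold_deg=30.0):
    """Return the index of the first point where the contour direction
    changes by more than threshold_deg, or None if no corner is found."""
    angles = tangent_angles(points)
    threshold = math.radians(threshold_deg)
    for i in range(1, len(angles)):
        # Wrap the angle difference to (-pi, pi] before comparing.
        d = (angles[i] - angles[i - 1] + math.pi) % (2 * math.pi) - math.pi
        if abs(d) > threshold:
            return i
    return None
```

An L-shaped contour such as `[(0,0), (1,0), (2,0), (2,1), (2,2)]` triggers detection at the bend, while a straight contour yields `None`; in practice the detected index would then switch the finite-state controller into its corner-taking state.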
The International Journal of Robotics Research | 2003
Johan Baeten; Herman Bruyninckx; Joris De Schutter
In this paper we show how to use the task frame to easily model, implement and execute three-dimensional (3D) robotic tasks, which integrate force control and visual servoing, in an uncalibrated workspace. In contrast to most hybrid vision/force research, this work uses eye-in-hand vision and force control. Mounting both sensors on the same end-effector gives rise to new constraints, control issues and advantages, which are discussed in this paper. On the one hand, we emphasize shared control, in which vision and force simultaneously control a given task frame direction. Our work shows the usefulness and feasibility of a range of tasks which use shared control. Moreover, it offers a framework based on the task frame formalism (TFF) to distinguish between different basic forms of shared control. Each basic form is illustrated by a robotic task with shared control in only one direction. In addition, an extension to classify multi-dimensional shared control tasks is presented. On the other hand, a new classification is presented which distinguishes between four meaningful tool/camera configurations: parallel or non-parallel endpoint closed-loop, and fixed or variable endpoint open-loop. Corresponding control strategies are discussed, resulting in either collocated or non-collocated vision/force control. Several task examples (in 3D space), specified in the TFF, illustrate the use of these four configurations. As shown by the presented experimental results, the tasks at hand benefit from the integrated control approach.
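In shared control as described above, vision and force simultaneously control the same task frame direction. A minimal sketch of what that could look like for a single direction, assuming a force error term and a vision-based correction are simply summed into one velocity setpoint; the gain values and function name are illustrative assumptions, not taken from the paper.

```python
def shared_control_velocity(f_desired, f_measured, visual_offset,
                            k_force=0.002, k_vision=1.5):
    """Velocity setpoint (m/s) for one task frame direction that is
    shared between force and vision control: comply toward the desired
    contact force while steering toward the visually measured contour.
    Gains are illustrative, not from the paper."""
    v_force = k_force * (f_desired - f_measured)   # force feedback term
    v_vision = k_vision * visual_offset            # vision correction term
    return v_force + v_vision
```

With the contact force on target and no visual offset the setpoint is zero; a force error of 5 N alone yields a compliant retreat/advance of 0.01 m/s with these example gains.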
Archive | 2004
Johan Baeten; Joris De Schutter
Contents: Introduction; Literature Survey; Framework; Classification; Visual Servoing: a 3D Alignment Task; Planar Contour Following of Continuous Curves; Planar Contour Following at Corners; Additional Experiments; Conclusion.
international conference on robotics and automation | 2002
Johan Baeten; Herman Bruyninckx; J. De Schutter
In contrast to most hybrid vision/force research, this work uses eye-in-hand vision and force control. Mounting both sensors on the same end effector gives rise to new constraints, control issues and advantages, which are discussed in the paper. Four meaningful tool/camera configurations, being parallel or non-parallel endpoint closed-loop and fixed or variable endpoint open-loop are suggested. Several task examples (in 3D space), specified in the task frame formalism, illustrate the use of these four configurations. Experimental results for the fixed EOL configuration are presented.
international conference on multisensor fusion and integration for intelligent systems | 1999
Johan Baeten; Herman Bruyninckx; J. De Schutter
This paper shows how to use Mason's compliance frame, or task frame, to model, implement and execute robotic tasks which combine force control and visual servoing in an uncalibrated workspace. Both vision and force sensors are mounted on the manipulator. Hence, the vision algorithms benefit from the known depth to the contact point when operating under force controlled contact. The sensor data are fused in a hybrid position/force control scheme with an internal velocity controller. The fusion method depends on the high-level task description. Experimental results are presented to validate the approach.
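The hybrid position/force scheme mentioned above assigns each task frame axis to either position/velocity control or force control. A minimal per-axis sketch, assuming a binary selection vector and an inner velocity loop that accepts the resulting commands; all names and gains here are assumptions for illustration.

```python
def hybrid_command(selection, v_desired, f_desired, f_measured, k_force=0.001):
    """Per-axis velocity command for a hybrid position/force scheme with an
    inner velocity loop: axes with selection == 1 are force controlled
    (velocity proportional to the force error), the others track v_desired."""
    return [k_force * (fd - fm) if s else vd
            for s, vd, fd, fm in zip(selection, v_desired, f_desired, f_measured)]
```

For example, with the x axis force controlled (10 N desired, 8 N measured) and the y axis tracking 0.05 m/s, the command is a small corrective velocity in x and the desired tracking velocity in y.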
international conference on advanced intelligent mechatronics | 1999
Johan Baeten; J. De Schutter
The limited bandwidth of sensor-based feedback control restricts the execution speed of a force controlled planar contour following task if the shape of the workpiece is unknown. This paper shows how appropriate feedforward control of the task frame orientation, calculated online from an eye-in-hand camera image, results in a higher quality (i.e. faster and/or more accurate) task execution. However, keeping the contour in the camera field of view, in addition to maintaining a force controlled contact, imposes additional requirements on the controller. This double control problem is specified in the task frame formalism and executed in a hybrid position/force control environment. It is solved using the redundancy for rotation in the plane, which exists for rotationally symmetric tools. The key to this solution is (continuously) redefining the relation between the task frame and the end effector. Experimental results are presented to validate the approach.
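The feedforward on the task frame orientation described above can be related to the local curvature of the contour seen ahead by the camera: for tangential speed v along a contour of curvature kappa, the required rotation rate is omega = v * kappa. A minimal sketch, assuming curvature is estimated from three contour points ahead of the contact point; the three-point estimator and all names are illustrative assumptions, not the paper's implementation.

```python
import math

def curvature(p0, p1, p2):
    """Curvature (1/m) of the circle through three contour points,
    signed by turn direction; 0.0 for (nearly) collinear points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p0, p2)
    # Twice the signed area of the triangle p0-p1-p2.
    area2 = (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0)
    if abs(area2) < 1e-12:
        return 0.0
    return 2.0 * area2 / (a * b * c)

def orientation_feedforward(points_ahead, speed):
    """Feedforward angular velocity (rad/s) for the task frame:
    omega = v * kappa, with kappa estimated from three points the
    camera measures ahead of the contact point."""
    p0, p1, p2 = points_ahead
    return speed * curvature(p0, p1, p2)
```

Three points on a unit circle give kappa = 1, so at 0.05 m/s the feedforward rotation rate is 0.05 rad/s; collinear points give zero, i.e. no rotation on a straight contour segment.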
intelligent robots and systems | 2002
Johan Baeten; Herman Bruyninckx; J. De Schutter
Hybrid position/force control can be extended to incorporate visual servoing. Together with the task frame formalism (TFF), it forms the basis to easily model, implement and execute robotic tasks in an uncalibrated workspace, using traded, hybrid or shared vision/force control. This paper emphasizes shared control in which both vision and force simultaneously control a given task frame direction. Our work shows the usefulness and feasibility of a range of tasks using shared control. Moreover, it offers a framework based on the TFF to distinguish between different (and partly new) forms of shared control. Each basic form is illustrated by a robotic task with shared control in one direction. In addition, an extension to classify multi-dimensional shared control tasks is presented. As shown by the experimental results, the tasks at hand benefit from the shared control approach.
international conference on advanced intelligent mechatronics | 2001
Johan Baeten; J. De Schutter
The accuracy and execution speed of a force controlled contour following task are limited if the shape of the workpiece is unknown. This is even more true when the workpiece contour contains corners. This paper shows how a combined vision/force control approach at corners in planar contour following results in a more accurate and faster task execution. The vision system is used to measure the contour on-line and to watch out for corners. The edge is correctly located by incorporating the compliance of the tool/camera set-up in the contour measurement. A simple corner detection algorithm is presented. Once a corner is detected, the finite state controller is activated to take the corner under the best conditions. Experimental results are presented to validate the approach.
Machine Intelligence and Robotic Control | 2000
Johan Baeten; Walter Verdonck; Herman Bruyninckx; Joris De Schutter
Archive | 2003
Johan Baeten; Joris De Schutter