Johan Rutgeerts
Katholieke Universiteit Leuven
Publications
Featured research published by Johan Rutgeerts.
The International Journal of Robotics Research | 2007
Joris De Schutter; Tinne De Laet; Johan Rutgeerts; Wilm Decré; Ruben Smits; Erwin Aertbeliën; Kasper Claes; Herman Bruyninckx
This paper introduces a systematic constraint-based approach to specify complex tasks of general sensor-based robot systems consisting of rigid links and joints. The approach integrates both instantaneous task specification and estimation of geometric uncertainty in a unified framework. Major components are the use of feature coordinates, defined with respect to object and feature frames, which facilitate the task specification, and the introduction of uncertainty coordinates to model geometric uncertainty. While the focus of the paper is on task specification, an existing velocity-based control scheme is reformulated in terms of these feature and uncertainty coordinates. This control scheme compensates for the effect of time-varying uncertainty coordinates. Constraint weighting results in an invariant robot behavior in the case of conflicting constraints with heterogeneous units. The approach applies to a large variety of robot systems (mobile robots, multiple robot systems, dynamic human-robot interaction, etc.), various sensor systems, and different robot tasks. Ample simulation and experimental results are presented.
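The weighted handling of conflicting constraints described in this abstract can be illustrated with a damped, weighted least-squares velocity resolution: each constraint row gets a weight that normalises its units, and the joint velocities minimise the weighted residual. This is a minimal sketch, not the paper's full control scheme; the function name `weighted_velocity_solve` and the diagonal weighting are illustrative assumptions.

```python
import numpy as np

def weighted_velocity_solve(A, b, weights, damping=1e-6):
    """Resolve possibly conflicting task constraints A @ qdot = b
    in a weighted least-squares sense.

    A        : (m, n) constraint Jacobian (rows may carry different units)
    b        : (m,) desired constraint velocities
    weights  : (m,) per-constraint weights normalising heterogeneous units
    damping  : small regulariser keeping the solve well-posed near
               singularities
    """
    W = np.diag(weights)
    # Normal equations of the weighted least-squares problem,
    # with a damping term on the joint velocities.
    H = A.T @ W @ A + damping * np.eye(A.shape[1])
    return np.linalg.solve(H, A.T @ W @ b)
```

With two conflicting one-dimensional constraints and equal weights, the solution splits the difference; raising one weight pulls the solution toward that constraint.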
IEEE Transactions on Robotics | 2007
Wim Meeussen; Johan Rutgeerts; Klaas Gadeyne; Herman Bruyninckx; Joris De Schutter
This paper presents a contribution to programming by human demonstration, in the context of compliant-motion task specification for sensor-controlled robot systems that physically interact with the environment. One wants to learn about the geometric parameters of the task and segment the total motion executed by the human into subtasks for the robot, each of which can be executed with a simple compliant-motion task specification. The motion of the human demonstration tool is sensed with a 3-D camera, and the interaction with the environment is sensed with a force sensor in the human demonstration tool. Both measurements are uncertain, and do not give direct information about the geometric parameters of the contacting surfaces, or about the contact formations (CFs) encountered during the human demonstration. The paper uses a Bayesian sequential Monte Carlo method (also known as a particle filter) to simultaneously estimate the CF (discrete information) and the geometric parameters (continuous information). The simultaneous CF segmentation and the geometric parameter estimation are helped by the availability of a contact state graph of all possible CFs. The presented approach applies to all compliant-motion tasks involving polyhedral objects with a known geometry, where the uncertain geometric parameters are the poses of the objects. This work improves the state of the art by scaling the contact estimation to all possible contacts, by presenting a prediction step based on the topological information of a contact state graph, and by presenting efficient algorithms that allow the estimation to operate in real time. In real-world experiments, it is shown that the approach is able to discriminate in real time between some 250 different CFs in the graph.
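The hybrid estimation idea above, a particle filter tracking a discrete contact formation jointly with continuous geometric parameters, with predictions restricted by a contact state graph, can be sketched as follows. This is a toy illustration under an assumed Gaussian measurement model, not the paper's implementation; `particle_filter_step`, the uniform graph transitions, and the noise scales are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, graph, measurement,
                         meas_model, step_sd=0.05):
    """One sequential Monte Carlo update over a hybrid state.

    particles  : list of (cf, theta) pairs -- a discrete contact-formation
                 label and a continuous geometric parameter
    graph      : dict mapping each CF to the CFs reachable in one step
                 (the contact state graph restricts the prediction)
    meas_model : meas_model(cf, theta) -> predicted measurement
    """
    # Predict: discrete jump along the contact state graph,
    # continuous diffusion on the geometric parameter.
    predicted = []
    for cf, theta in particles:
        cf_next = rng.choice(graph[cf])
        predicted.append((cf_next, theta + rng.normal(0.0, step_sd)))
    # Correct: reweight by a Gaussian measurement likelihood.
    like = np.array([np.exp(-0.5 * ((measurement - meas_model(cf, th)) / 0.1) ** 2)
                     for cf, th in predicted])
    w = weights * like
    w /= w.sum()
    # Resample proportionally to the weights; return uniform weights.
    idx = rng.choice(len(predicted), size=len(predicted), p=w)
    n = len(predicted)
    return [predicted[i] for i in idx], np.full(n, 1.0 / n)
```

Run over a measurement stream, the particle cloud concentrates on the contact formation (and parameter values) consistent with the observations, while the graph keeps predictions to physically reachable CFs.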
Intelligent Robots and Systems | 2005
Johan Rutgeerts; Peter Slaets; F. Schillebeeckx; Wim Meeussen; Walter Verdonck; B. Stallaert; P. Princen; Tine Lefebvre; Herman Bruyninckx; J. De Schutter
This paper presents a modular demonstration tool for robot programming by human demonstration and an approach for the calibration of the tool's sensors. The tool is equipped with a wrench sensor, twelve LED markers for fast and accurate six-dimensional position tracking with the Krypton K600 camera system, a compact camera, and a laser distance sensor. A gripper mechanism is mounted on the tool for grasping and manipulating objects. The design of the tool specifically focused on the demonstration of compliant-motion tasks, with applications in manipulation and assembly. The calibration approach first uses an extended Kalman filter to convert the measured positions of three to twelve visible LEDs into the pose of the tool frame relative to the Krypton camera frame. Then, using a nonminimal-state Kalman filter, the force sensor calibration parameters are calculated and the orientation of the Krypton camera frame relative to the world frame is determined. This calibration approach is verified in a real-world experiment.
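The first calibration step, recovering the tool pose from the positions of the visible LED markers, can be illustrated with a batch least-squares rigid transform (the SVD-based Kabsch/Umeyama method), here standing in for the paper's recursive extended Kalman filter; `fit_pose` and its interface are assumed names for illustration.

```python
import numpy as np

def fit_pose(model_pts, measured_pts):
    """Least-squares rigid transform (R, t) mapping marker positions
    given in the tool frame (model_pts, shape (k, 3)) onto their
    measured camera-frame positions (measured_pts, shape (k, 3)).

    Works for any number of visible markers k >= 3 that are not
    collinear, matching the 3-to-12 visible LEDs of the tool.
    """
    mc = model_pts.mean(axis=0)
    sc = measured_pts.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (model_pts - mc).T @ (measured_pts - sc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t
```

Given noise-free marker measurements generated by a known pose, the method recovers that pose exactly; with noisy measurements it returns the least-squares optimum.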
International Symposium on Experimental Robotics | 2006
Peter Slaets; Johan Rutgeerts; Klaas Gadeyne; Tine Lefebvre; Herman Bruyninckx; Joris De Schutter
This paper describes the construction of a geometric 3-D model from the identification of geometrical parameters (vertices and faces) of rigid polyhedral objects in the environment during the force-controlled execution of contact formation sequences. The following improvements with respect to the state of the art are made: (i) creation of a 3-D model of a previously unknown environment; (ii) estimation of a force decomposition useful for feedback to a force controller or for monitoring the contact forces; and (iii) a method to reduce the number of modelling parameters, leading to a computational reduction, better precision, and a more accurate geometric description.
IEEE Transactions on Robotics | 2007
Peter Slaets; Tine Lefebvre; Johan Rutgeerts; Herman Bruyninckx; J. De Schutter
This paper describes the construction of a local polyhedral 3-D feature model derived from pose and wrench sensor measurements collected during a force-controlled execution of a sequence of polyhedral contact formations. The procedure consists of two steps: 1) the sensor measurements are filtered, resulting in a (nonminimal) representation of a 3-D feature model; and 2) identification of a reduced set of geometric parameters (vertices and faces) of the rigid polyhedral objects in the environment is performed. The following improvements with respect to the state of the art are made: 1) creation of a polyhedral 3-D feature model of a previously unknown polyhedral environment; 2) estimation of a force decomposition useful for feedback in a force controller or for monitoring the contact forces; and 3) reduction of the number of model parameters, leading to a computational reduction, a better precision of the geometric parameters, and a higher-level description of the contact topology.
ISRR | 2005
Herman Bruyninckx; J. De Schutter; Tine Lefebvre; Klaas Gadeyne; Peter Soetens; Johan Rutgeerts; Peter Slaets; Wim Meeussen
This paper presents our research group’s latest results in autonomous force-controlled manipulation tasks: (i) advanced non-linear estimators for simultaneous parameter estimation and contact formation “map building” for 6D contact tasks (with active sensing integrated into the task planner), and (ii) the application of these results to programming by human demonstration, for tasks involving contacts.
Intelligent Robots and Systems | 2007
T. De Laet; Wilm Decré; Johan Rutgeerts; Herman Bruyninckx; J. De Schutter
This paper shows the application of a systematic approach for constraint-based task specification for sensor-based robot systems to a laser tracing example. This approach integrates both task specification and estimation of geometric uncertainty in a unified framework. The framework consists of an application-independent control and estimation scheme. An automatic derivation of controller and estimator equations is achieved, based on a geometric task model that is obtained using a systematic task modeling procedure. The paper details the systematic modeling procedure for the laser tracing task and elaborates on the task-specific choice of two types of task coordinates: feature coordinates, defined with respect to object and feature frames, which facilitate the task specification, and uncertainty coordinates to model geometric uncertainty. Furthermore, the control and estimation scheme for this specific task is studied. Simulation and real-world experimental results are presented for the laser tracing example.
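As a concrete instance of a feature coordinate for a laser tracing task, the position of the laser spot is the intersection of the laser ray with the traced plane, expressed relative to a frame on that plane. The sketch below is an illustrative geometric helper under that interpretation, not part of the paper's framework; `laser_spot` and its parameters are assumed names.

```python
import numpy as np

def laser_spot(origin, direction, plane_point, plane_normal):
    """Intersection of the laser ray (origin + s * direction, s >= 0)
    with the plane through plane_point with normal plane_normal."""
    direction = direction / np.linalg.norm(direction)
    denom = direction @ plane_normal
    if abs(denom) < 1e-12:
        raise ValueError("laser ray is parallel to the plane")
    s = ((plane_point - origin) @ plane_normal) / denom
    return origin + s * direction
```

A controller can then impose constraints directly on this spot position (e.g. track a desired curve on the plane), while uncertainty coordinates absorb errors in the assumed plane pose.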
International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2006
Wim Meeussen; Johan Rutgeerts; Klaas Gadeyne; Herman Bruyninckx; Joris De Schutter
This paper presents a contribution to Bayesian sensor fusion in the context of human demonstration for compliant-motion task specification, where sensor-controlled robot systems physically interact with the environment. One wants to learn about the geometric parameters of a task and segment the total motion executed during the human demonstration into subtasks for the robot. The motion of the human demonstration tool is sensed by measuring the position of multiple LED markers with a 3-D camera, and the interaction with the environment is sensed with a force/torque sensor inside the demonstration tool. All measurements are uncertain, and do not give direct information about the geometric parameters of the contacting surfaces, or about the contact formations encountered during the human demonstration. The paper uses a Bayesian sequential Monte Carlo method (also known as a particle filter) to simultaneously estimate the contact formation (discrete information) and the geometric parameters (continuous information), where different measurement models link the information from heterogeneous sensors to the hybrid unknown parameters. The simultaneous contact formation segmentation and the geometric parameter estimation are helped by the availability of a contact state graph of all possible contact formations. The presented approach applies to all compliant-motion tasks involving polyhedral objects with a known geometry, where the uncertain geometric parameters are the poses of the objects. The approach has been verified in real-world experiments, in which it is able to discriminate in real time between some 250 different contact formations in the graph.
Conference on Computer as a Tool | 2007
Wilm Decré; T. De Laet; Johan Rutgeerts; Herman Bruyninckx; J. De Schutter
This paper shows the application of a generic constraint-based task specification approach for sensor-based robot systems to a laser tracing example. Key properties of this approach are (i) its ability to specify complex robot tasks by introducing auxiliary task-oriented feature coordinates, defined with respect to user-defined object and feature frames, (ii) its support for both underconstrained and overconstrained robot tasks, and (iii) its ability to integrate sensor measurements in a unified way, using auxiliary uncertainty coordinates, to estimate geometric uncertainties in the robot system or its environment. Simulation and real-world experimental results are presented.
International Conference on Robotics and Automation | 2005
J. De Schutter; Johan Rutgeerts; Erwin Aertbeliën; F. De Groote; T. De Laet; Tine Lefebvre; Walter Verdonck; Herman Bruyninckx