
Publication


Featured research published by Jean Gallice.


International Conference on Robotics and Automation | 2002

Position based visual servoing: keeping the object in the field of vision

Benoit Thuilot; Philippe Martinet; Lionel Cordesses; Jean Gallice

Visual servoing requires an object in the field of view of the camera in order to control the robot's motion: otherwise, the virtual link is broken and the control loop can no longer be closed. In this paper, a novel approach is presented to guarantee that the object remains in the field of view of the camera during the whole robot motion. It consists of tracking an iteratively computed trajectory. A position-based model adapted to a moving target object is established and used to control the trajectory, and a nonlinear decoupling approach is then used to control the robot. Experiments demonstrating the capabilities of this approach have been conducted on a Cartesian robot connected to a real-time vision system, with a CCD camera mounted on the end effector of the robot.


International Conference on Robotics and Automation | 1996

Visual servoing in robotics scheme using a camera/laser-stripe sensor

Djamel Khadraoui; Guy Motyl; Philippe Martinet; Jean Gallice; François Chaumette

The work presented in this paper belongs to the realm of robotics and computer vision. The problem we seek to solve is the accomplishment of robotic tasks using visual features provided by a special sensor mounted on a robot end effector. This sensor consists of two laser stripes fixed rigidly to a camera, projecting planar light onto the scene. First, we briefly describe the classical visual servoing approach. We then generalize this approach to the case of our special sensor by considering its interaction with a sphere. This interaction allows us to establish a kinematic relation between the sensor and the scene. Finally, results are presented both in simulation and in our experimental cell. They concern a positioning task with respect to a sphere, and show the robustness and stability of the control scheme.


Machine Vision and Applications | 1994

Road obstacle detection and tracking by an active and intelligent sensing strategy

Ming Xie; Laurent Trassoudaine; Joseph Alizon; Jean Gallice

In this paper, we address the problem of road obstacle detection. We propose a method based on an active and intelligent sensing strategy. A sensor composed of a range finder coupled with a charge-coupled-device (CCD) camera is used. This sensor is mounted on the front of a vehicle. The basic idea is first to determine 2D visual targets in the intensity images of the camera. The range finder is then used not only to confirm or reject the existence of the detected visual targets, but also to acquire 3D information about the confirmed targets. The central problem of this strategy is how to detect 2D visual targets in intensity images of a road scene. In our method, we consider line segments as significant features, using the concepts of line segment of interest and of dominant line segment. With the help of the dominant line segments identified in an image, we can effectively determine 2D visual targets. Finally, we use the range finder to confirm or reject each 2D visual target; a confirmed target is tracked over time with the help of the range finder.


Intelligent Robots and Systems | 1999

Position based visual servoing using a non-linear approach

Philippe Martinet; Jean Gallice

Vision-based control has attracted the attention of many authors in recent years. We first worked on the image-based visual servoing approach and have recently focused on the position-based approach. In this paper, our goal is to study how 3D visual features can be introduced into a closed robot control loop. We consider a camera mounted on the end effector of a manipulator robot to estimate the pose of the target object; the required positioning task is to reach a specific pose between the sensor frame and a target object frame. Knowing the target object model, we can localize the object in the 3D visual sensor frame and estimate the pose between the camera and the target object at each iteration. To perform the visual servoing task, we use nonlinear state feedback. We propose a new exact model for the parametrization of the pose (the position and orientation of the object frame in the sensor frame). The main advantage of this approach is that camera translation and camera rotation are controlled separately, thanks to a particular choice of frames. Convergence and stability have been proved theoretically, and tests both in simulation and on our experimental platform show the good behaviour of this approach.
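The decoupled position-based control described in this abstract can be sketched as a proportional law acting separately on the translation and rotation errors between the current and desired camera poses. The sketch below is a minimal illustration under common conventions, not the authors' exact parametrization; the gain `lam` and the angle-axis form of the rotation error are assumptions:

```python
import numpy as np

def rotation_error(R_cur, R_des):
    """Angle-axis (theta * u) vector of the rotation taking R_cur to R_des."""
    R_err = R_des @ R_cur.T
    theta = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    # Extract the rotation axis from the skew-symmetric part of R_err.
    u = (1.0 / (2.0 * np.sin(theta))) * np.array([
        R_err[2, 1] - R_err[1, 2],
        R_err[0, 2] - R_err[2, 0],
        R_err[1, 0] - R_err[0, 1],
    ])
    return theta * u

def pose_based_control(t_cur, R_cur, t_des, R_des, lam=0.5):
    """Decoupled proportional control: the translation error drives the
    linear velocity and the rotation error drives the angular velocity."""
    v = -lam * (t_cur - t_des)                    # linear velocity command
    omega = -lam * rotation_error(R_cur, R_des)   # angular velocity command
    return v, omega
```

Because the two errors never mix, a pure translation error produces no rotation command, which is the separation property the abstract highlights.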


International Conference on Computer Vision | 1993

Active and intelligent sensing of road obstacles: Application to the European Eureka-PROMETHEUS project

Ming Xie; Laurent Trassoudaine; Joseph Alizon; Monique Thonnat; Jean Gallice

The authors address the problem of road obstacle detection. A sensor composed of an eye-safe laser range finder coupled with a charge-coupled-device (CCD) camera is proposed. This sensor is mounted on the front of a vehicle. The basic idea is first to determine 2-D visual targets in the intensity images of the camera. The range finder is then used not only to confirm or reject the existence of the detected visual targets but also to acquire 3-D information about the confirmed targets. The central problem of this strategy is the detection of 2-D visual targets in intensity images of a road scene. In this method, line segments are considered as significant features, using the concepts of a line segment of interest and a dominant line segment. Two-dimensional visual targets can be effectively determined with the help of the dominant line segments identified in an image. The range finder is then used to confirm or reject each 2-D visual target.


Intelligent Robots and Systems | 1997

Trajectory generation by visual servoing

François Berry; Philippe Martinet; Jean Gallice

We describe an approach to the problem of trajectory generation in the workspace by visual servoing. Visual servoing is based on a set of measurements taken from images and used at each instant as an error function to compute a control vector. This control is applied to the system (robot and camera) and makes it move to reach a desired situation, defined at the end of the task directly in the image. The originality of this work lies in the concept of a time-varying reference feature: classically, in visual servoing, the reference features are static and the task to be achieved is similar to a positioning task. We define a specific task function which takes the time-varying aspect into account, and we synthesize a new control law in the sensor space. This control law ensures trajectory control in the workspace. Considering that any trajectory in the workspace can be described as a combination of rotations and translations, we have tested our approach on these two elementary trajectories.
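The time-varying reference idea can be sketched as a task function e(t) = s(t) - s*(t) driven by a proportional term plus a feedforward term for the motion of the reference. This is a generic illustration of the concept, not the paper's exact control law; the gain `lam` and the precomputed pseudo-inverse `L_pinv` of the interaction matrix are assumptions:

```python
import numpy as np

def tracking_control(s, s_ref, s_ref_dot, L_pinv, lam=1.0):
    """Visual tracking control with a time-varying reference feature s*(t).
    The feedforward term L_pinv @ s_ref_dot keeps the error from lagging
    behind the moving reference, unlike the purely proportional law used
    for static positioning tasks."""
    e = s - s_ref                                  # task function e(t) = s(t) - s*(t)
    return -lam * (L_pinv @ e) + L_pinv @ s_ref_dot
```

With a static reference (s_ref_dot = 0) this reduces to the classical positioning law; the feedforward term alone moves the camera along with the reference even when the error is zero.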


International Conference on Robotics and Automation | 1996

Use of first derivative of geometric features in visual servoing

Philippe Martinet; François Berry; Jean Gallice

Visual servoing is based on a set of measurements taken from images and used at each instant as an error function to compute a control vector. This control is applied to the system (robot and camera) and makes it move to reach a desired situation, defined at the end of the task directly in the image. The originality of this work consists in improving the visual servoing approach by considering a sensor signal vector composed of a geometric feature ((x, y) point coordinates, line parameters, etc.) and its first derivative. In this paper, we show how to derive the corresponding interaction matrix. We have tested this new approach on a workstation in the case of the point feature, and have also implemented it on our robotic platform. The overall results show a great improvement due to this new sensor signal. We are extending this approach to more complex features.
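For the point feature discussed above, the classical 2x6 interaction matrix of a normalized image point is standard in the visual servoing literature; the paper's extension would stack this feature with its first derivative in the sensor signal vector. A minimal sketch of the classical point case, assuming the depth Z of the point is known:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix of a normalized image point (x, y)
    at depth Z, relating the image motion (x_dot, y_dot) to the camera
    velocity screw (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])
```

The translational columns scale with 1/Z while the rotational columns do not, which is why depth estimation matters for the translational part of the control only.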


Solid State Communications | 1977

Electronic spin-lattice relaxation time in TTF-TCNQ and Rb-TCNQ

G. Berthet; J. P. Blanc; Jean Gallice; H. Robert; C. Thibaud; J.M. Fabre; L. Giral

Measurements at room temperature of the electronic spin-lattice relaxation time in tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ) and rubidium-TCNQ (II) are reported. A tentative discussion of the experimental results is given.


Intelligent Robots and Systems | 1999

Visual feedback in camera motion generation: experimental results

François Berry; Philippe Martinet; Jean Gallice

We propose several results on trajectory generation by visual servoing. The approach consists of defining a specific task function which takes into account the time-varying aspect of the reference feature, and of synthesizing a control law in the sensor space. This control law ensures trajectory control in the image space and reduces the tracking error. Under specific conditions, the trajectory of the camera can also be controlled in the robot workspace. The main goal of this work is to demonstrate the effectiveness of this approach through experimental results. In the experiments, we used a Cartesian robot and a real-time vision system, with a CCD camera mounted on the end effector of the robot. We present two types of trajectory: the first is a helical trajectory parallel to a cube face; the second involves passing around a cube and is built by linking several elementary trajectories (rotations and translations).


Intelligent Robots and Systems | 2000

Turning around an unknown object using visual servoing

François Berry; Philippe Martinet; Jean Gallice

In this paper, we address the problem of controlling a motion around an unknown object by visual servoing. This work can be interpreted as an initial step towards the perception of an unmodeled object: the main purpose is to perform motion with respect to the object in order to observe it from several viewpoints. The originality of our work lies in the choice and extraction of visual features in accordance with the motions to be performed. The notion of an invariant feature is introduced to control the navigation task around the unknown object. A real-time experiment with a complex object shows the generality of the proposed ideas.

Collaboration


Jean Gallice's top co-authors:

Joseph Alizon (Blaise Pascal University)
François Berry (University of Clermont-Ferrand)
G. Soda (University of Paris-Sud)
Guy Motyl (Blaise Pascal University)
M. Weger (University of Paris-Sud)
Benoit Thuilot (Blaise Pascal University)