Brahim Tamadazte
Centre national de la recherche scientifique
Publications
Featured research published by Brahim Tamadazte.
The International Journal of Robotics Research | 2010
Brahim Tamadazte; Eric Marchand; Sounkalo Dembélé; N. Le Fort-Piat
This paper investigates sequential robotic microassembly for the construction of 3D micro-electro-mechanical systems (MEMS) structures using a 3D visual servoing approach. Previous solutions proposed in the literature for this kind of problem are based on 2D visual control because of the lack of precise and robust 3D measurements of the work scene. In this paper, the relevance of the proposed real-time 3D visual tracking method and 3D vision-based control law is demonstrated. The 3D poses of the MEMS parts are supplied in real time by a computer-aided design model-based tracking algorithm. This algorithm is sufficiently accurate and robust to enable a precise regulation toward zero of the 3D error using the proposed pose-based visual servoing approach. Experiments on a microrobotic setup have been carried out to achieve assemblies of two or more 400 μm × 400 μm × 100 μm silicon micro-objects by their respective 97 μm × 97 μm × 100 μm notches, with an assembly clearance of 1 μm to 5 μm. The different microassembly processes are performed with a mean error of 0.3 μm in position and 0.35 × 10⁻² rad in orientation.
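The pose-based visual servoing scheme described above can be illustrated by the classical PBVS formulation, in which the 3D pose error (a translation vector plus an axis-angle rotation, as supplied here by the model-based tracker) is driven exponentially to zero. This is a generic sketch of the standard law, not the authors' exact implementation; `lam` is a hypothetical gain value.

```python
import numpy as np

def pbvs_velocity(t_err, theta_u, lam=0.5):
    """Classical pose-based visual servoing law (generic sketch):
    given the translation error t_err (3-vector) and the axis-angle
    rotation error theta_u (3-vector), return the 6-DOF velocity
    command that drives both errors exponentially to zero."""
    v = -lam * np.asarray(t_err, dtype=float)    # translational velocity
    w = -lam * np.asarray(theta_u, dtype=float)  # rotational velocity
    return np.concatenate([v, w])
```

With this law, each error component decays as e(t) = e(0)·exp(−lam·t), which is what allows the regulation of the 3D error toward zero reported in the paper.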
IEEE-ASME Transactions on Mechatronics | 2012
Brahim Tamadazte; Nadine Piat; Eric Marchand
This paper demonstrates an accurate nanopositioning scheme based on a direct visual servoing process. This technique uses only the pure image signal (photometric information) to design the visual servoing control law. In contrast to traditional visual servoing approaches that use geometric visual features (points, lines, etc.), the visual feature used in the control law is the pixel intensity. The proposed approach has been tested in terms of accuracy and robustness under several experimental conditions. The obtained results demonstrate the good behavior of the control law and very good positioning accuracy: 89 nm, 14 nm, and 0.001° along the x-, y-, and θ-axes of a positioning platform, respectively.
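In direct (photometric) visual servoing, the feature vector is the image itself: the error is simply the stacked pixel-intensity difference between the current and desired images. The sketch below shows only this error and the cost it induces (the full method also requires the photometric interaction matrix relating intensity change to platform motion, which is not reproduced here).

```python
import numpy as np

def photometric_error(I, I_star):
    """Stacked pixel-intensity difference between the current image I
    and the desired image I_star -- the visual feature error used by
    direct visual servoing (illustrative sketch)."""
    return (I.astype(float) - I_star.astype(float)).ravel()

def photometric_cost(I, I_star):
    """Scalar cost 0.5 * ||I - I_star||^2 minimized by the control law."""
    e = photometric_error(I, I_star)
    return 0.5 * float(e @ e)
```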
international conference on advanced intelligent mechatronics | 2009
Brahim Tamadazte; Thomas Arnould; Sounkalo Dembélé; Nadine Le Fort-Piat; Eric Marchand
Robotic microassembly is a promising way to fabricate three-dimensional (3D) compound products from micrometric components whose materials or technologies are incompatible: structures, devices, Micro Electro Mechanical Systems (MEMS), Micro Opto Electro Mechanical Systems (MOEMS), etc. To date, the solutions proposed in the literature are based on 2D visual control because of the lack of accurate and robust 3D measurements of the work scene. In this paper the relevance of real-time 3D visual tracking and control is demonstrated. The 3D poses of the MEMS parts are supplied by a model-based tracking algorithm in real time. It is accurate and robust enough to enable a precise regulation toward zero of a 3D error using a visual servoing approach. The assembly of 400 µm × 400 µm × 100 µm parts by their 100 µm × 100 µm × 100 µm notches, with a mechanical play of 3 µm, is achieved at a rate of 41 seconds per assembly. The control accuracy reaches 0.3 µm in position and 0.2° in orientation.
intelligent robots and systems | 2009
Brahim Tamadazte; Nadine Le Fort-Piat; Sounkalo Dembélé; Eric Marchand
This paper describes vision-based methods developed for the assembly of complex and solid 3D MEMS (micro-electro-mechanical systems) structures. The microassembly process is based on sequential robotic operations such as planar positioning, gripping, orientation in space, and insertion tasks. Each of these microassembly tasks is performed using pose-based visual control. To control the microassembly process, a 3D model-based tracker is used. This tracker directly provides the 3D micro-object pose in real time from only a single view of the scene. The methods proposed in this paper are validated on an automatic assembly of five silicon microparts of 400 µm × 400 µm × 100 µm on three levels. The insertion tolerance (mechanical play) is estimated at 3 µm. This insertion tolerance makes it possible to obtain solid and complex micro-electro-mechanical structures without any external joining (glue, welding, etc.). Promising accuracies are obtained, reaching 0.3 µm in position and 0.2° in orientation.
conference on automation science and engineering | 2008
Brahim Tamadazte; Sounkalo Dembélé; Guillaume Fortier; N. Le Fort-Piat
The paper deals with the manipulation of silicon microcomponents in order to assemble them automatically. The size of the components varies from 600 µm × 400 µm × 100 µm to 300 µm × 300 µm × 100 µm, with a notch of 100 µm thickness on every side. The microassembly process is split up into elementary tasks (aligning a component, positioning a component, centering a component, opening the gripper, ...), each of which is achieved by visual servoing. The control laws impose an exponential or polynomial decrease of the error, according to the task. Performing these tasks required the implementation of an effective tracking algorithm combined with a depth-from-focus technique, in order to keep the target in focus and to recover the distance between the gripper and the component. The process includes the adaptation of the video microscope magnification to the required resolution (coarse-to-fine servoing). A multiple-scale modelling and calibration of the photon video microscope is performed. The picking and placing of the above components were achieved: the positioning errors are 1.4 µm in x and y, and 0.5° in orientation, respectively.
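The depth-from-focus idea mentioned above can be sketched simply: acquire images at several known lens (or stage) positions, score each with a sharpness measure, and take the position of the sharpest image as the depth estimate. Gray-level variance is one common criterion; the paper does not specify its exact measure, so this is an assumption.

```python
import numpy as np

def variance_sharpness(img):
    """Gray-level variance as a sharpness score (a common
    depth-from-focus criterion; assumed here for illustration)."""
    return float(np.var(np.asarray(img, dtype=float)))

def depth_from_focus(stack, z_positions):
    """Return the z position whose image maximizes sharpness,
    recovering e.g. the gripper-to-component distance."""
    scores = [variance_sharpness(img) for img in stack]
    return z_positions[int(np.argmax(scores))]
```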
intelligent robots and systems | 2013
Naresh Marturi; Brahim Tamadazte; Sounkalo Dembélé; Nadine Piat
Fast and reliable autofocusing methods are essential for performing automatic nano-object positioning tasks using a scanning electron microscope (SEM). So far in the literature, various autofocusing algorithms have been proposed that utilize a sharpness measure to compute the best focus. Most of them are based on iterative search approaches, applying the sharpness function over the total focus range to find an in-focus image. In this paper, a new, fast, and direct autofocusing method is presented, based on the idea of traditional visual servoing: the focus step is controlled using an adaptive gain. The visual control law is validated using a normalized variance sharpness function. The obtained experimental results demonstrate the performance of the proposed autofocusing method in terms of accuracy, speed, and robustness.
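The normalized variance sharpness function used for validation is standard: image variance divided by mean intensity. The focus controller below is only a hedged sketch of the servoing idea (keep stepping while sharpness increases, reverse with a reduced step when it drops); the authors' actual adaptive-gain law is not reproduced here, and `lam` is a hypothetical gain.

```python
import numpy as np

def normalized_variance(img):
    """Normalized-variance sharpness: variance divided by mean intensity."""
    img = np.asarray(img, dtype=float)
    mu = img.mean()
    return float(((img - mu) ** 2).mean() / mu)

def focus_step(s_curr, s_prev, step_prev, lam=0.7):
    """One iteration of a gradient-style focus controller (illustrative
    stand-in for the paper's adaptive-gain law): keep the direction while
    sharpness improves, otherwise reverse with a shrunken step."""
    if s_curr >= s_prev:
        return step_prev          # sharpness improving: keep going
    return -lam * step_prev       # overshot the peak: back up, smaller step
```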
international conference on robotics and automation | 2014
Naresh Marturi; Brahim Tamadazte; Sounkalo Dembélé; Nadine Piat
This paper presents two visual servoing approaches for nanopositioning in a scanning electron microscope (SEM). The first approach uses the total pixel intensities of an image as visual measurements for designing the control law. The positioning error and the platform control are directly linked to the intensity variations. The second approach is a frequency-domain method that uses the Fourier transform to compute the relative motion between images. In this case, the control law is designed to minimize the error, i.e., the 2D motion between the current and desired images, by controlling the movement of the positioning platform. Both methods are validated under different experimental conditions on a task of positioning silicon microparts using a piezo-positioning platform. The obtained results demonstrate the efficiency and robustness of the developed methods.
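A standard way to compute 2D motion between images in the frequency domain is phase correlation: the phase of the cross-power spectrum encodes the translation, which appears as a peak in the inverse transform. The paper's Fourier-based method follows this principle, though its exact formulation may differ; the sketch below recovers integer-pixel shifts only.

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the integer-pixel translation taking img_a to img_b via
    phase correlation (standard Fourier-domain registration sketch)."""
    A = np.fft.fft2(img_a)
    B = np.fft.fft2(img_b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12                 # keep only phase information
    corr = np.fft.ifft2(R).real            # peak marks the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                        # wrap large shifts to negatives
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

In a servoing loop, the estimated (dy, dx) between the current and desired images would serve directly as the 2D error to be regulated to zero.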
intelligent robots and systems | 2014
Brahim Tamadazte; Nicolas Andreff
This paper deals with the study of a weakly calibrated multiview visual servoing control law for microrobotic laser phonomicrosurgery of the vocal folds. It is part of the development of an endoluminal surgery system for laser ablation and resection of cancerous tissues. More specifically, this paper focuses on the control of the laser spot displacement during surgical interventions. To perform this, a visual control law based on trifocal geometry is designed using two cameras and a laser source (treated as a virtual camera). The method is validated on a realistic test bench, and straight point-to-point trajectories are demonstrated.
The International Journal of Robotics Research | 2016
Nicolas Andreff; Brahim Tamadazte
This paper focuses on the development of a weakly calibrated three-view-based visual servoing control law applied to the laser steering process. It proposes to revisit the conventional trifocal constraints governing a three-view geometry for more suitable use in the design of an efficient trifocal vision-based control. Thereby, an explicit control law is derived, without any matrix inversion, which makes it simple to prove the global exponential stability of the control. Moreover, only 'twenty-five lines of code' are necessary to implement a fast trifocal control system. Thanks to the simplicity of the implementation, our control law is fast, accurate, and robust to errors in the weak calibration, and it exhibits good behavior in terms of convergence and decoupling. This was demonstrated by different experimental validations performed on a test bench for steering a laser spot on 2D and 3D surfaces using a two-degrees-of-freedom commercial piezoelectric mirror, as well as in preliminary cadaver trials using an endoluminal micromirror prototype.
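The global exponential stability claim follows the standard visual servoing argument: if the explicit control law yields closed-loop error dynamics of the form below (a generic sketch; the paper's actual proof concerns the trifocal error and its specific control law), the error decays exponentially from any initial condition.

```latex
\dot{e}(t) = -\lambda\, e(t), \quad \lambda > 0
\;\;\Longrightarrow\;\;
e(t) = e(0)\, e^{-\lambda t} \;\xrightarrow[t \to \infty]{}\; 0 .
```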
IEEE Transactions on Instrumentation and Measurement | 2016
Naresh Marturi; Brahim Tamadazte; Sounkalo Dembélé; Nadine Piat
Depth estimation for micro/nanomanipulation inside a scanning electron microscope (SEM) is always a major concern. So far, in the literature, various methods have been proposed based on stereoscopic imaging. Most of them require an external hardware unit or manual interaction during the process. In this paper, relying solely on image sharpness information, we present a new technique to estimate the depth in real time. To improve both the accuracy and the rapidity of the method, we treat both autofocus and depth estimation as visual servoing paradigms. The major flexibility of the method lies in its ability to compute the focus position and the depth using only the acquired image information, i.e., sharpness. The feasibility of the method is shown by performing various ground-truth experiments: autofocusing, depth estimation, focus-based nanomanipulator depth control, and sample topography estimation in different scenarios inside the vacuum chamber of a tungsten gun SEM. The obtained results demonstrate the accuracy, rapidity, and efficiency of the developed method.