Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Fadi Dornaika is active.

Publication


Featured research published by Fadi Dornaika.


International Conference on Robotics and Automation | 1998

Simultaneous robot-world and hand-eye calibration

Fadi Dornaika; Radu Horaud

Zhuang et al. (1994) proposed a method that allows simultaneous computation of the rigid transformations from world frame to robot base frame and from hand frame to camera frame. Their method attempts to solve a homogeneous matrix equation of the form AX=ZB. They use quaternions to derive an explicit linear solution for X and Z. In this paper, we present two new solutions that attempt to solve the homogeneous matrix equation mentioned above: 1) a closed-form method which uses quaternion algebra and a positive quadratic error function associated with this representation; 2) a method based on nonlinear constrained minimization which simultaneously solves for rotations and translations. These results may be useful for other problems that can be formulated in the same mathematical form. We perform a sensitivity analysis for both of our methods and for the linear method developed by Zhuang et al. This analysis allows the comparison of the three methods. In light of this comparison, the nonlinear optimization method, which solves for rotations and translations simultaneously, appears to be the most stable with respect to noise and measurement errors.
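The AX = ZB structure is easy to exercise numerically. Below is a minimal NumPy sketch on noise-free simulated poses (a linear null-space solve, not the paper's quaternion error function or its constrained minimization): the rotations come from the null space of a stacked Kronecker-product system, and the translations from a joint least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(0)

def quat_to_rot(q):
    # Unit quaternion (w, x, y, z) -> 3x3 rotation matrix.
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def rand_pose():
    T = np.eye(4)
    T[:3, :3] = quat_to_rot(rng.standard_normal(4))
    T[:3, 3] = rng.standard_normal(3)
    return T

# Ground-truth hand-eye (X) and robot-world (Z) transforms, and a set of
# measurement pairs satisfying A_i X = Z B_i.
X, Z = rand_pose(), rand_pose()
Bs = [rand_pose() for _ in range(5)]
As = [Z @ B @ np.linalg.inv(X) for B in Bs]

# Rotation part: vec(RA RX) = (I kron RA) vec(RX) and
# vec(RZ RB) = (RB^T kron I) vec(RZ), with column-major vec; stack all
# pairs and take the one-dimensional null space via SVD.
rows = [np.hstack([np.kron(np.eye(3), A[:3, :3]),
                   -np.kron(B[:3, :3].T, np.eye(3))])
        for A, B in zip(As, Bs)]
v = np.linalg.svd(np.vstack(rows))[2][-1]
MX = v[:9].reshape(3, 3, order='F')
MZ = v[9:].reshape(3, 3, order='F')
if np.linalg.det(MX) < 0:  # resolve the global sign of the null vector
    MX, MZ = -MX, -MZ

def to_SO3(M):
    # Closest rotation matrix in the Frobenius-norm sense.
    U, _, Vt = np.linalg.svd(M)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

RX, RZ = to_SO3(MX), to_SO3(MZ)

# Translation part: RA tX - tZ = RZ tB - tA, solved jointly by least squares.
Alin = np.vstack([np.hstack([A[:3, :3], -np.eye(3)]) for A in As])
blin = np.concatenate([RZ @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
tX, tZ = np.split(np.linalg.lstsq(Alin, blin, rcond=None)[0], 2)
```

With noise-free data the recovered X and Z match the simulated ground truth; the paper's sensitivity analysis concerns how such solvers degrade when the A and B measurements are noisy.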


International Conference on Robotics and Automation | 1998

Visually guided object grasping

Radu Horaud; Fadi Dornaika; Bernard Espiau

We present a visual servoing approach to the problem of object grasping and, more generally, to the problem of aligning an end-effector with an object. First, we extend the method proposed by Espiau et al. (1992) to the case of a camera which is not mounted onto the robot being controlled, and we stress the importance of the real-time estimation of the image Jacobian. Next, we show how to represent a grasp or, more generally, an alignment between two solids in 3D projective space using an uncalibrated stereo rig. Such a 3D projective representation is view-invariant in the sense that it can be easily mapped into an image set-point without any knowledge about the camera parameters. Finally, we analyse the performance of the visual servoing algorithm and the grasping precision that can be expected from this type of approach.
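The image-Jacobian idea can be illustrated with a tiny simulation of classical image-based visual servoing (the 2x6 point-feature interaction matrix of Espiau et al. 1992, in normalized image coordinates with known depths; the gain, time step, and scene are illustrative assumptions, and the eye-to-hand extension in the paper is not modelled):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # 2x6 image Jacobian of a point feature (x, y) in normalized image
    # coordinates at depth Z; columns correspond to the camera twist
    # (vx, vy, vz, wx, wy, wz).
    return np.array([
        [-1/Z,  0.0, x/Z, x*y,      -(1 + x*x),  y],
        [ 0.0, -1/Z, y/Z, 1 + y*y,  -x*y,       -x]])

def project(P):
    return P[:, :2] / P[:, 2:3]

# Four 3-D points in the current camera frame; the desired view is the
# same scene with the camera displaced 0.2 m along its x axis.
P = np.array([[-0.5, -0.5, 2.0], [0.5, -0.5, 2.0],
              [0.5,  0.5, 2.0], [-0.5, 0.5, 2.0]])
s_des = project(P - np.array([0.2, 0.0, 0.0]))

lam, dt = 0.5, 1.0
for _ in range(30):
    s = project(P)
    L = np.vstack([interaction_matrix(px, py, Z)
                   for (px, py), Z in zip(s, P[:, 2])])
    e = (s - s_des).ravel()
    v = -lam * np.linalg.pinv(L) @ e          # camera twist command
    # Integrate the point motion relative to the camera: Pdot = -(v + w x P).
    P = P + dt * (-(v[:3] + np.cross(v[3:], P)))
```

The control law v = -lambda L+ e drives the image error to zero; estimating L online rather than from a calibrated model is the issue the abstract emphasizes.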


Systems, Man and Cybernetics | 2004

Fast and reliable active appearance model search for 3-D face tracking

Fadi Dornaika; Jörgen Ahlberg

This paper addresses the three-dimensional (3D) tracking of pose and animation of the human face in monocular image sequences using active appearance models. The major problem of the classical appearance-based adaptation is the high computational time resulting from the inclusion of a synthesis step in the iterative optimization. Whenever the dimension of the face space is large, real-time performance cannot be achieved. In this paper, we aim at designing a fast and stable active appearance model search for 3D face tracking. The main contribution is a search algorithm whose CPU time does not depend on the dimension of the face space. Using this algorithm, we show that both the CPU time and the likelihood of inaccurate tracking are reduced. Experiments evaluating the effectiveness of the proposed algorithm are reported, as well as method comparisons and tracking of synthetic and real image sequences.


IEEE Transactions on Circuits and Systems for Video Technology | 2006

On Appearance Based Face and Facial Action Tracking

Fadi Dornaika; Franck Davoine

In this work, we address the problem of tracking faces and facial actions in a single video sequence. The main contributions of the paper are as follows. First, we develop a particle filter based framework for tracking the global 3-D motion of a face using a statistical facial appearance model. Second, we propose a framework for tracking the 3-D face pose as well as the local motion of inner features of the face, due, for instance, to spontaneous facial actions, using an adaptive appearance model. We allow the statistics of the facial appearance as well as the dynamics to be adaptively updated during tracking. Third, we propose a variant of the second framework based on a heuristic search. Tracking real video sequences demonstrated the effectiveness of the developed methods. Accurate tracking was obtained even in the presence of perturbing factors including significant head pose and facial expression variations, occlusions, and illumination changes.
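One common way to let appearance statistics adapt during tracking is a per-pixel Gaussian model updated with exponential forgetting. The sketch below is a generic form of such an online appearance model; the forgetting factor, initial variance, and toy signal are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

class OnlineAppearance:
    """Per-pixel Gaussian appearance model with exponential forgetting."""
    def __init__(self, first_obs, alpha=0.95, var0=0.01):
        self.alpha = alpha
        self.mu = first_obs.astype(float)
        self.var = np.full_like(self.mu, var0)

    def update(self, obs):
        # Blend the new observation into the running mean and variance.
        a = self.alpha
        self.mu = a * self.mu + (1 - a) * obs
        self.var = a * self.var + (1 - a) * (obs - self.mu) ** 2

    def log_likelihood(self, obs):
        # Gaussian log-likelihood of an observed patch under the model.
        return -0.5 * np.sum((obs - self.mu) ** 2 / self.var
                             + np.log(2 * np.pi * self.var))

# The model locks onto a noisy but stationary 16-pixel "texture".
appearance = np.linspace(0.0, 1.0, 16)
model = OnlineAppearance(appearance + 0.05 * rng.standard_normal(16))
for _ in range(200):
    model.update(appearance + 0.05 * rng.standard_normal(16))
```

The likelihood term is what a tracker would maximize over pose and action parameters at each frame.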


Computer Vision and Pattern Recognition | 2004

Head and Facial Animation Tracking using Appearance-Adaptive Models and Particle Filters

Fadi Dornaika; Franck Davoine

This paper introduces two frameworks for head and facial animation tracking. The first is a particle-filter tracker capable of tracking the 3D head pose using a statistical facial texture model. The second is an appearance-adaptive tracker capable of tracking the 3D head pose and the facial animations in real time. This framework has the merits of both deterministic and stochastic approaches: it consists of an online adaptive observation model of the face texture together with an adaptive transition motion model, the latter based on a registration technique between the appearance model and the incoming observation. The second framework extends the concept of Online Appearance Models to the case of tracking 3D non-rigid face motion (3D head pose and facial animations). Tracking long video sequences demonstrated the effectiveness of the developed methods. Accurate tracking was obtained even in the presence of perturbing factors such as illumination changes, significant head pose and facial expression variations, as well as occlusions.


Pattern Recognition | 2012

A supervised non-linear dimensionality reduction approach for manifold learning

Bogdan Raducanu; Fadi Dornaika

In this paper we introduce a novel supervised manifold learning technique called Supervised Laplacian Eigenmaps (S-LE), which makes use of class label information to guide the procedure of non-linear dimensionality reduction by adopting the large margin concept. The graph Laplacian is split into two components, a within-class graph and a between-class graph, to better characterize the discriminant property of the data. Our approach has two important characteristics: (i) it adaptively estimates the local neighborhood surrounding each sample based on data density and similarity, and (ii) the objective function simultaneously maximizes the local margin between heterogeneous samples and pushes the homogeneous samples closer to each other. Our approach has been tested on several challenging face databases and compared with other linear and non-linear techniques, demonstrating its superiority. Although this paper concentrates on the face recognition problem, the proposed approach could also be applied to other categories of objects characterized by large variations in their appearance (such as hand or body pose).
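A toy sketch in the spirit of the within-class/between-class graph split (omitting the paper's adaptive neighborhood estimation and exact margin objective; the heat-kernel bandwidth, k, and regularizer below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def supervised_embedding(X, y, dim=2, k=5, reg=1e-6):
    """Supervised spectral embedding: pull same-label kNN pairs together,
    push different-label kNN pairs apart (a simplified S-LE-style sketch)."""
    n = len(X)
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.exp(-d2 / np.median(d2))           # heat-kernel affinities
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]  # k nearest neighbours (no self)
    same, diff = np.zeros((n, n)), np.zeros((n, n))
    for i in range(n):
        for j in knn[i]:
            if y[i] == y[j]:
                same[i, j] = same[j, i] = W[i, j]
            else:
                diff[i, j] = diff[j, i] = W[i, j]
    Lw = np.diag(same.sum(1)) - same          # within-class graph Laplacian
    Lb = np.diag(diff.sum(1)) - diff          # between-class graph Laplacian
    # Maximize between-class spread relative to within-class spread:
    # top eigenvectors of (Lw + reg I)^-1 Lb.
    vals, vecs = np.linalg.eig(np.linalg.solve(Lw + reg * np.eye(n), Lb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:dim]].real

# Two noisy classes in 10-D, embedded into 2-D.
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(2, 1, (30, 10))])
y = np.array([0] * 30 + [1] * 30)
Y = supervised_embedding(X, y)
```

Like Laplacian eigenmaps, this embeds the training graph directly (non-linear, no explicit projection matrix), which is why out-of-sample extension is a separate concern for such methods.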


International Conference on Computer Vision | 2005

Simultaneous facial action tracking and expression recognition using a particle filter

Fadi Dornaika; Franck Davoine

The recognition of facial gestures and expressions in image sequences is an important and challenging problem. Most existing methods adopt the following paradigm: first, facial actions/features are retrieved from the images, and then facial expressions are recognized based on the retrieved temporal parameters. Unlike this mainstream approach, this paper introduces a new approach allowing the simultaneous recovery of facial actions and expression using a particle filter adopting multiclass dynamics that are conditioned on the expression. For each frame in the video sequence, our approach is split into two consecutive stages. In the first stage, the 3D head pose is recovered using a deterministic registration technique based on online appearance models. In the second stage, the facial actions as well as the facial expression are simultaneously recovered using the stochastic framework with mixed states. The proposed fast scheme is at least as robust as existing ones in many regards. Experimental results show the feasibility and robustness of the proposed approach.


Real-Time Imaging | 1999

Pose Estimation using Point and Line Correspondences

Fadi Dornaika; Christophe Garcia

The problem of real-time pose estimation between a 3D scene and a single camera is a fundamental task in most 3D computer vision and robotics applications such as object tracking, visual servoing, and virtual reality. In this paper we present two fast methods for estimating the 3D pose using 2D to 3D point and line correspondences. The first method is based on the iterative use of a weak perspective camera model and forms a generalization of DeMenthon's method (1995), which determines the pose from point correspondences. In this method the pose is iteratively improved with a weak perspective camera model, and at convergence the computed pose corresponds to the perspective camera model. The second method is based on the iterative use of a paraperspective camera model, which is a first-order approximation of perspective. We describe in detail these two methods for both non-planar and planar objects. Experiments involving synthetic data as well as real range data indicate the feasibility and robustness of these two methods. We analyse the convergence of these methods and conclude that the iterative paraperspective method has better convergence properties than the iterative weak perspective method. We also introduce a non-linear optimization method for solving the pose problem.
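The iterative weak-perspective idea can be sketched as a POSIT-style loop in the manner of DeMenthon (1995): solve a weak-perspective pose linearly, use it to correct the image points for perspective, and repeat. This sketch handles only non-coplanar points with normalized image coordinates and unit focal length (the paper's line correspondences, planar case, and paraperspective variant are not shown):

```python
import numpy as np

rng = np.random.default_rng(5)

def posit(model_pts, img_pts, n_iter=50):
    # POSIT-style pose from 2D-3D point correspondences
    # (non-coplanar points, normalized image coordinates, unit focal).
    M = model_pts - model_pts[0]              # object vectors M0->Mi
    x, y = img_pts[:, 0], img_pts[:, 1]
    eps = np.zeros(len(M))
    for _ in range(n_iter):
        # Weak-perspective image corrected by the current epsilon terms.
        I = np.linalg.lstsq(M, x * (1 + eps) - x[0], rcond=None)[0]
        J = np.linalg.lstsq(M, y * (1 + eps) - y[0], rcond=None)[0]
        r1, r2 = I / np.linalg.norm(I), J / np.linalg.norm(J)
        r3 = np.cross(r1, r2)
        r2 = np.cross(r3, r1)                 # re-orthonormalize
        Tz = 1.0 / np.sqrt(np.linalg.norm(I) * np.linalg.norm(J))
        eps = M @ r3 / Tz                     # perspective correction
    R = np.vstack([r1, r2, r3])
    T = np.array([x[0] * Tz, y[0] * Tz, Tz])  # pose of the reference point
    return R, T

# Synthetic check: project random points with a known pose, then recover it.
q = rng.standard_normal(4); q /= np.linalg.norm(q)
w, a, b, c = q
R_true = np.array([[1-2*(b*b+c*c), 2*(a*b-w*c),   2*(a*c+w*b)],
                   [2*(a*b+w*c),   1-2*(a*a+c*c), 2*(b*c-w*a)],
                   [2*(a*c-w*b),   2*(b*c+w*a),   1-2*(a*a+b*b)]])
T_true = np.array([0.1, -0.2, 10.0])
pts = rng.uniform(-1, 1, (8, 3))
cam = pts @ R_true.T + T_true
img = cam[:, :2] / cam[:, 2:3]
R_est, T_est = posit(pts, img)
```

At convergence the epsilon terms make the weak-perspective equations consistent with full perspective, which is why the computed pose corresponds to the perspective model.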


IEEE Transactions on Intelligent Transportation Systems | 2008

An Efficient Approach to Onboard Stereo Vision System Pose Estimation

Angel Domingo Sappa; Fadi Dornaika; Daniel Ponsa; David Gerónimo; Antonio M. López

This paper presents an efficient technique for estimating the pose of an onboard stereo vision system relative to the environment's dominant surface, which is assumed to be the road surface. Unlike previous approaches, it can be used in either urban or highway scenarios since it is not based on extracting a specific visual traffic feature but on 3D raw data points. The whole process is performed in Euclidean space and consists of two stages. Initially, a compact 2D representation of the original 3D data points is computed. Then, a RANdom SAmple Consensus (RANSAC) based least-squares approach is used to fit a plane to the road. Fast RANSAC fitting is obtained by selecting points according to a probability function that takes into account the density of points at a given depth. Finally, stereo camera height and pitch angle are computed relative to the fitted road plane. The proposed technique is intended for use in driver-assistance systems for applications such as vehicle or pedestrian detection. Experimental results on urban environments, which are the most challenging scenarios (i.e., flat/uphill/downhill driving, speed bumps, and car accelerations), are presented. These results are validated with manually annotated ground truth. Additionally, comparisons with previous works are presented to show the improvements in central processing unit time as well as in the accuracy of the obtained results.
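The RANSAC-plus-least-squares plane fit, followed by reading off camera height and pitch, can be sketched as follows (uniform sampling rather than the paper's depth-weighted sampling; the camera-axis conventions and thresholds are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

def ransac_plane(pts, n_iter=200, thresh=0.05):
    # Fit a plane n.x + d = 0 (unit n) with RANSAC, then refine the model
    # on the inliers by least squares (SVD of the centred inlier cloud).
    best = None
    for _ in range(n_iter):
        s = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(s[1] - s[0], s[2] - s[0])
        if np.linalg.norm(n) < 1e-9:
            continue                          # degenerate sample
        n /= np.linalg.norm(n)
        inliers = np.abs(pts @ n - n @ s[0]) < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    P = pts[best]
    c = P.mean(axis=0)
    n = np.linalg.svd(P - c)[2][-1]           # least-squares plane normal
    d = -n @ c
    if d < 0:                                 # orient toward the camera
        n, d = -n, -d
    return n, d

# Synthetic road in camera coordinates (x right, y down, z forward):
# a plane `height` metres below a camera pitched down by `pitch`.
height, pitch = 1.2, np.deg2rad(5)
xs = rng.uniform(-5, 5, 400)
zs = rng.uniform(5, 30, 400)
ys = (height + np.sin(pitch) * zs) / np.cos(pitch)
road = np.column_stack([xs, ys, zs]) + 0.01 * rng.standard_normal((400, 3))
clutter = rng.uniform([-5, -3, 5], [5, 1, 30], (80, 3))  # non-road points
pts = np.vstack([road, clutter])

n_est, d_est = ransac_plane(pts)
cam_height = abs(d_est)                         # origin-to-plane distance
pitch_est = np.arccos(min(1.0, abs(n_est[1])))  # normal tilt vs. the y axis
```

The clutter points play the role of vehicles and other off-road structure that the robust fit must reject.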


Image and Vision Computing | 2006

Fitting 3D face models for tracking and active appearance model training

Fadi Dornaika; Jörgen Ahlberg

In this paper, we consider fitting a 3D deformable face model to continuous video sequences for the tasks of tracking and training. We propose two appearance-based methods that only require a simple statistical facial texture model and do not require any information about an empirical or analytical gradient matrix, since the best search directions are estimated on the fly. The first method computes the fitting using a locally exhaustive and directed search where the 3D head pose and the facial actions are simultaneously estimated. The second method decouples the estimation of these parameters. It computes the 3D head pose using a robust feature-based pose estimator incorporating a facial texture consistency measure. Then, it estimates the facial actions with an exhaustive and directed search. Fitting and tracking experiments demonstrate the feasibility and usefulness of the developed methods. A performance evaluation also shows that the proposed methods can outperform the fitting based on an active appearance model search adopting a pre-computed gradient matrix. Although the proposed schemes are not as fast as the schemes adopting a directed continuous search, they can tackle many disadvantages associated with such approaches.
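A locally exhaustive, directed search of the general kind described, try small offsets per parameter, keep whichever lowers the error, shrink when stuck, can be sketched generically (a quadratic bowl stands in for the appearance residual; the step schedule is an assumption, not the paper's scheme):

```python
import numpy as np

def directed_search(cost, p0, step=0.5, shrink=0.5, n_rounds=40):
    """Gradient-free coordinate search: for each parameter in turn, try
    +/- step offsets and keep the best; shrink the step when no offset
    improves the cost."""
    p, best = p0.astype(float), cost(p0)
    for _ in range(n_rounds):
        improved = False
        for i in range(len(p)):
            for delta in (step, -step):
                trial = p.copy()
                trial[i] += delta
                c = cost(trial)
                if c < best:
                    p, best, improved = trial, c, True
        if not improved:
            step *= shrink
    return p, best

# Toy "texture error" as a function of pose/action parameters.
target = np.array([0.3, -1.2, 0.7])
cost = lambda p: float(np.sum((p - target) ** 2))
p_est, err = directed_search(cost, np.zeros(3))
```

Because only cost evaluations are needed, no empirical or analytical gradient matrix has to be stored or precomputed, which is the property the abstract highlights.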

Collaboration


Dive into Fadi Dornaika's collaborations.

Top Co-Authors

Bogdan Raducanu
University of the Basque Country

Angel Domingo Sappa
Escuela Superior Politecnica del Litoral

Alireza Bosaghzadeh
University of the Basque Country

Yassine Ruichek
Centre national de la recherche scientifique

Abdelmalik Moujahid
University of the Basque Country

Franck Davoine
Centre national de la recherche scientifique

Antonio M. López
Autonomous University of Barcelona

Y. El Traboulsi
University of the Basque Country