Samer Abdallah
Australian National University
Publications
Featured research published by Samer Abdallah.
Intelligent Robots and Systems | 2001
Chanop Silpa-Anan; Thomas S. Brinsmead; Samer Abdallah; Alexander Zelinsky
We consider a visually guided autonomous underwater vehicle. We develop position-based visual servo control of fixed and slow-moving targets using visual position feedback and sensor-based orientation feedback. The visual position feedback is implemented on a stereo camera system; a compass and an inclinometer provide the orientation feedback. We also implement a computed torque controller for the vehicle motion control, using Euler parameters to represent the orientation state. Using Euler parameters eliminates singularities in the model and the controller. Preliminary experimental results of visual servo control are reported.
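A point worth unpacking from the abstract above is the choice of Euler parameters (a unit quaternion) for the orientation state: the attitude kinematics stay globally non-singular, whereas Euler-angle kinematics break down at 90 degrees of pitch. The sketch below is not taken from the paper; it is a minimal illustration of that difference, assuming a body-frame angular velocity and a scalar-first quaternion convention.

# Minimal sketch (not the paper's controller): quaternion vs. Euler-angle
# attitude kinematics, showing why Euler parameters avoid the singularity.
import numpy as np

def quat_mul(q, r):
    # Hamilton product of scalar-first quaternions [w, x, y, z].
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rate(q, omega_body):
    # q_dot = 0.5 * (quaternion product of q and [0, omega]);
    # well defined for every attitude.
    return 0.5 * quat_mul(q, np.concatenate(([0.0], omega_body)))

def euler_zyx_rate(rpy, omega_body):
    # Roll/pitch/yaw kinematics; the 1/cos(pitch) terms blow up near +/-90 deg.
    roll, pitch, _ = rpy
    sr, cr = np.sin(roll), np.cos(roll)
    tp, cp = np.tan(pitch), np.cos(pitch)
    T = np.array([[1.0, sr*tp,  cr*tp],
                  [0.0, cr,    -sr   ],
                  [0.0, sr/cp,  cr/cp]])
    return T @ omega_body

omega = np.array([0.1, 0.2, 0.05])                    # body angular velocity, rad/s
print(quat_rate(np.array([1.0, 0.0, 0.0, 0.0]), omega))               # finite everywhere
print(euler_zyx_rate(np.array([0.0, np.deg2rad(89.9), 0.0]), omega))  # near-singular

In practice the quaternion must be renormalised after each integration step to stay on the unit sphere; the computed torque control law itself is not reproduced here.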
Field and Service Robotics | 1998
Andrew J. Brooks; Glenn N. Dickins; Alexander Zelinsky; Jon Kieffer; Samer Abdallah
While active vision is a relatively new approach to computer vision, it offers impressive computational benefits for scene analysis in realistic environments. This paper describes a novel camera platform for the real-time, real-world application of active vision in robots. Performance requirements are presented alongside the figures actually achieved, together with an alternative, task-based method of specifying active vision system abilities. Details of the platform's cable-drive transmission mechanism are provided, as well as the advantages of this scheme. Finally, research directions involving this platform are discussed.
Asian Conference on Computer Vision | 1998
Samer Abdallah; Eduardo Mario Nebot; David C. Rye
The magnitude of the Zernike moments has been used successfully in previous works [1,9,13,19] as a rotation-invariant feature for object recognition. The phase angle of the Zernike moments, however, has been ignored in the past. In this paper, this phase angle is used to determine the orientation of the identified object. Furthermore, it is demonstrated that a saving of 65% in the computation time of the Zernike moments can be achieved by using the explicit forms of their radial polynomials. The effectiveness of the approach is illustrated by tests on real images of industrial objects.
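The orientation-from-phase idea in this abstract follows from the standard rotation property of Zernike moments; the identity below is the textbook relation (up to the sign convention for the rotation direction), not the paper's specific estimator. If an image is rotated by an angle \alpha, each moment acquires only a phase factor:

Z'_{nm} = Z_{nm}\, e^{-jm\alpha} \;\Rightarrow\; |Z'_{nm}| = |Z_{nm}|, \qquad \alpha = \frac{\arg Z_{nm} - \arg Z'_{nm}}{m} \quad (m \neq 0),

so the magnitude is rotation invariant, while the phase difference between a stored model and an observed object recovers the object's orientation.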
Intelligent Robots and Computer Vision XIX: Algorithms, Techniques, and Active Vision | 2000
Edwige Pissaloux; Hichem Bouayed; Samer Abdallah
This paper addresses the design principles and definition of a non-invasive vision system that assists blind people in moving through non-cooperating environments. Global perception and understanding of the environment is a key problem for such systems. Optical vision is one of the senses that provides global information about the nearby environment; associating it with a suitable global environment representation, transmitted to the user through a tactile interface, allows blind users to locate all moving and fixed obstacles. Adding a vision sensor to an electronic travel aid (ETA) for the blind increases the user's autonomy and helps them orient themselves and move safely in a 3D environment. The principles stated for a vision-based ETA can easily be transposed to any robotic vision system, especially that of a humanoid.
Intelligent Robots and Systems | 1998
Andrew J. Brooks; Samer Abdallah; Alexander Zelinsky; Jon Kieffer
The increase in hardware performance versus component cost has brought vision firmly into the realm of practical robotic sensors. We present an overview of an integrated research program to create a general-purpose robotic vision system from an expansive rather than linear perspective, incorporating a multimodal approach to real-time visual interaction with the environment. Design and construction of a high-performance active camera platform using primarily off-the-shelf components have proceeded in parallel with the design and implementation of a suite of low-level behaviours suitable for feeding quantised data to a multitude of visually guided tasks, from mobile robot navigation and visual servoing to human-robot interaction and instruction. We describe these initiatives, including performance specifications and experimental results obtained under real-world conditions.
International Conference on Robotics and Automation | 2000
Harley Truong; Samer Abdallah; Sebastien Rougeaux; Alexander Zelinsky
International Conference on Robotics and Automation | 2000
Matthew Bryant; David Wettergreen; Samer Abdallah; Alexander Zelinsky
International Conference on Robotics and Automation | 2000
Orson Sutherland; Sebastien Rougeaux; Samer Abdallah; Alexander Zelinsky
International Conference on Robotics and Automation | 2000
J Cvetanovski; Samer Abdallah; Thomas S. Brinsmead; David Wettergreen; Alexander Zelinsky
Archive | 2000
Hichem Bouayed; Samer Abdallah