Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where A. H. Abdul Hafez is active.

Publication


Featured research published by A. H. Abdul Hafez.


international conference on robotics and automation | 2007

Visual Servoing by Optimization of a 2D/3D Hybrid Objective Function

A. H. Abdul Hafez; C. V. Jawahar

In this paper, we present a new hybrid visual servoing algorithm for the robot arm positioning task. Hybrid methods in visual servoing partially combine 2D and 3D visual information to improve the performance of traditional image-based and position-based visual servoing. Our algorithm is superior to state-of-the-art hybrid methods. The objective function is designed to include the full 2D and 3D information available either from a CAD model or from a partial reconstruction obtained by decomposing the homography matrix between two views. Here, both the 2D and the 3D error functions are used to control the six degrees of freedom; we call this method 5D visual servoing. The positioning task is formulated as a minimization problem. Gradient descent as a first-order approximation and Gauss-Newton as a second-order approximation are considered in this paper. Simulation results show that these two methods provide an efficient solution to the camera-retreat and feature-visibility problems. The camera trajectory in Cartesian space is also shown to be satisfactory.
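To make the optimization step concrete, the sketch below shows gradient-descent and Gauss-Newton updates on a stacked 2D/3D error, in the spirit of the paper; the error vectors and the combined interaction matrix are random placeholders, not the paper's actual objective function.

```python
# Minimal sketch: minimize a stacked 2D/3D visual servoing error with
# gradient-descent and Gauss-Newton steps. Errors and Jacobian are placeholders.
import numpy as np

def gradient_descent_step(J, e, alpha=0.1):
    """First-order step: move the camera velocity against the gradient J^T e."""
    return -alpha * J.T @ e

def gauss_newton_step(J, e, lam=0.5):
    """Second-order (Gauss-Newton) step using the pseudo-inverse of J."""
    return -lam * np.linalg.pinv(J) @ e

# Hypothetical stacked error: 2D image error (pixels) and 3D pose error.
e_2d = np.array([4.0, -2.0, 1.5, 0.5])                  # e.g. two point features
e_3d = np.array([0.05, -0.02, 0.10, 0.01, 0.0, 0.02])   # translation + rotation
e = np.concatenate([e_2d, e_3d])

# Hypothetical combined interaction matrix mapping the 6-DOF camera velocity
# to the stacked error rates (10 x 6 here).
rng = np.random.default_rng(0)
J = rng.standard_normal((e.size, 6))

v_gd = gradient_descent_step(J, e)   # camera velocity from gradient descent
v_gn = gauss_newton_step(J, e)       # camera velocity from Gauss-Newton
print(v_gd, v_gn)
```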


international conference on robotics and automation | 2008

Visual servoing based on Gaussian mixture models

A. H. Abdul Hafez; Supreeth Achar; C. V. Jawahar

In this paper, we present a novel approach to robust visual servoing. The method removes the feature-tracking step from a typical visual servoing algorithm: we do not need feature correspondences to derive the control signal. This is achieved by modeling the image features as a mixture of Gaussians in both the current and the desired images. Using Lyapunov theory, a control signal is derived to minimize a distance function between the two Gaussian mixtures. The distance function has a closed form, and its gradient can be efficiently computed and used to control the system. For simplicity, we first consider the 2D motion case; the general case is then presented by introducing the depth distribution of the features to control the six degrees of freedom. Experiments are conducted within a simulation framework to validate the proposed method.
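As a rough illustration of the correspondence-free idea, the sketch below models the current and desired feature sets as equally weighted isotropic Gaussian mixtures, evaluates the closed-form L2 distance between them, and descends its gradient with respect to a 2D image-plane translation. The sigma, step size, and feature coordinates are arbitrary choices, and the paper's Lyapunov-based control law and depth handling are not reproduced here.

```python
# Correspondence-free toy example: descend the L2 distance between two GMMs.
import numpy as np

def gauss(d2, var):
    """2D isotropic Gaussian density evaluated at squared distance d2."""
    return np.exp(-d2 / (2.0 * var)) / (2.0 * np.pi * var)

def l2_distance(curr, des, sigma=5.0):
    """Closed-form L2 distance between two equally weighted isotropic GMMs."""
    var = 2.0 * sigma ** 2  # variance of the product-of-Gaussians integral
    def cross(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return gauss(d2, var).mean()
    return cross(curr, curr) - 2.0 * cross(curr, des) + cross(des, des)

def numeric_gradient(curr, des, eps=1e-3):
    """Gradient of the distance w.r.t. a 2D image-plane translation of curr."""
    g = np.zeros(2)
    for k in range(2):
        shift = np.zeros(2)
        shift[k] = eps
        g[k] = (l2_distance(curr + shift, des) - l2_distance(curr - shift, des)) / (2 * eps)
    return g

curr = np.array([[120.0, 80.0], [200.0, 150.0], [90.0, 220.0]])  # feature pixels
des = curr + np.array([15.0, -10.0])                             # shifted goal set
for _ in range(200):
    curr -= 2e4 * numeric_gradient(curr, des)  # step size tuned for this toy 2D case
print(np.round(curr - des, 2))                 # residual should shrink toward zero
```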


international conference on robotics and automation | 2014

Reactionless visual servoing of a dual-arm space robot

A. H. Abdul Hafez; V. V. Anurag; Suril V. Shah; K. Madhava Krishna; C. V. Jawahar

This paper presents a novel visual servoing controller for a satellite-mounted dual-arm space robot. The controller is designed to complete the task of servoing the robot's end-effectors to the desired pose while regulating the orientation of the base satellite. A task-redundancy approach is utilized to coordinate the servoing process and the attitude of the base satellite. The visual task is defined as the primary task, while regulating the attitude of the base satellite to zero is defined as the secondary task. The secondary task is formulated as an optimization problem in such a way that it does not affect the primary task while simultaneously minimizing its cost function. A set of numerical experiments is carried out on a dual-arm space robot, showing the efficacy of the proposed control methodology.
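A minimal sketch of the task-priority idea is given below, assuming a standard null-space projection formulation: the secondary (base-attitude) task is resolved only inside the null space of the primary (visual) task's Jacobian, so it cannot disturb the primary task. The Jacobians and error vectors are random placeholders rather than the actual space-robot model.

```python
# Task-priority sketch: primary visual task plus a secondary base-attitude task
# projected into the null space of the primary Jacobian.
import numpy as np

def task_priority_velocity(J1, e1, J2, e2, gain1=1.0, gain2=1.0):
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1      # null-space projector of task 1
    qdot_primary = gain1 * J1_pinv @ e1          # servo the end-effectors (primary)
    qdot_secondary = gain2 * np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ qdot_primary)
    return qdot_primary + N1 @ qdot_secondary    # secondary acts only inside N1

rng = np.random.default_rng(0)
n_joints = 14                               # e.g. 7 joints per arm, dual-arm robot
J1 = rng.standard_normal((12, n_joints))    # visual-task Jacobian of both arms
J2 = rng.standard_normal((3, n_joints))     # base-attitude (reaction) Jacobian
e1 = rng.standard_normal(12)                # visual task error
e2 = rng.standard_normal(3)                 # attitude regulation error
qdot = task_priority_velocity(J1, e1, J2, e2)
print(np.allclose(J1 @ qdot, J1 @ (np.linalg.pinv(J1) @ e1)))  # primary task preserved
```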


intelligent robots and systems | 2007

Path planning approach to visual servoing with feature visibility constraints: A convex optimization based solution

A. H. Abdul Hafez; Anil Kumar Nelakanti; C. V. Jawahar

This paper explores the possibility of using convex optimization to address a class of problems in visual servoing. The work is motivated by the recent success of convex optimization methods in solving geometric inference problems in computer vision. We formulate the visual servoing problem with feature-visibility constraints as a convex optimization over the camera position, i.e., the translation of the camera. First, the path is planned using a potential field method, which produces an unconstrained, straight-line path from the initial to the desired camera position. The problem is then converted into a constrained convex optimization problem by introducing the visibility constraints into the minimization. The objective of the minimization is to find, for each camera position, the closest alternative position from which all features are visible. This formulation ensures that the solution is optimal and allows the introduction of further constraints, such as the joint limits of the arm, into the visual servoing process. The results are illustrated in a simulation framework.
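The toy problem below sketches the constrained step with cvxpy (my choice of solver, not necessarily the authors'): given a planned camera position that violates visibility, it finds the closest position from which all 3D features stay inside the field of view. With a fixed camera orientation the field-of-view constraints are affine in the camera translation, so the problem is convex; the feature points, field of view, and orientation are made-up values.

```python
# Closest visible camera position as a small convex program (cvxpy).
import numpy as np
import cvxpy as cp

R = np.eye(3)                          # assumed fixed camera orientation
features = np.array([[0.3, 0.2, 2.0],  # hypothetical 3D feature points (m)
                     [-0.4, 0.1, 2.5],
                     [0.2, -0.3, 1.8]])
p_planned = np.array([0.8, 0.0, 0.0])  # planned position (violates visibility)
tan_half_fov = np.tan(np.deg2rad(25.0))
z_min = 0.2

p = cp.Variable(3)
constraints = []
for X in features:
    Xc = R.T @ (X - p)                 # feature in the camera frame (affine in p)
    constraints += [Xc[2] >= z_min,
                    cp.abs(Xc[0]) <= tan_half_fov * Xc[2],
                    cp.abs(Xc[1]) <= tan_half_fov * Xc[2]]
problem = cp.Problem(cp.Minimize(cp.sum_squares(p - p_planned)), constraints)
problem.solve()
print(p.value)   # closest camera position with all features inside the FOV
```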


intelligent robots and systems | 2006

Integration Framework for Improved Visual Servoing in Image and Cartesian Spaces

A. H. Abdul Hafez; C. V. Jawahar

In this paper, we present a new integration method for improving the performance of visual servoing. The method integrates both image-based visual servoing (IBVS) and position-based visual servoing (PBVS) to satisfy the requirements of the visual servoing process. We define a probabilistic integration rule for the IBVS and PBVS controllers; density functions that determine the probability of each controller are defined to satisfy these requirements. We prove that this integration method provides global stability and avoids local minima. The new integration method is validated on positioning tasks and compared with other switching methods.
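A possible reading of the integration rule is sketched below: the IBVS and PBVS velocity commands are blended with normalized weights computed from density functions of the image and Cartesian errors. The particular densities and gains are illustrative assumptions, not the paper's definitions.

```python
# Sketch of probabilistically blending IBVS and PBVS control signals.
import numpy as np

def blended_control(v_ibvs, v_pbvs, e_image, e_pose, s_img=20.0, s_pose=0.1):
    # Illustrative weighting: favor IBVS when the image error is small and PBVS
    # when the Cartesian error is small; the paper defines its own densities.
    w_ibvs = np.exp(-np.linalg.norm(e_image) / s_img)
    w_pbvs = np.exp(-np.linalg.norm(e_pose) / s_pose)
    total = w_ibvs + w_pbvs + 1e-12
    return (w_ibvs * v_ibvs + w_pbvs * v_pbvs) / total

v_ibvs = np.array([0.02, -0.01, 0.05, 0.0, 0.01, 0.0])   # IBVS camera velocity
v_pbvs = np.array([0.03, 0.00, 0.04, 0.0, 0.00, 0.01])   # PBVS camera velocity
e_image = np.array([35.0, -20.0])     # image-space error (pixels)
e_pose = np.array([0.15, 0.05, 0.3])  # Cartesian translation error (m)
print(blended_control(v_ibvs, v_pbvs, e_image, e_pose))
```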


intelligent robots and systems | 2013

Visual localization in highly crowded urban environments

A. H. Abdul Hafez; Manpreet Singh; K. Madhava Krishna; C. V. Jawahar

Visual localization in crowded dynamic environments requires information about both static and dynamic objects. This paper presents a robust method that learns useful features from multiple runs in highly crowded urban environments. Useful features are identified as distinctive ones that can also be reliably extracted under diverse imaging conditions. The relative importance of the features is used to derive a weight for each feature. The popular bag-of-words model is used for image retrieval and localization, where the query image is the current view of the environment and the database contains the visual experience from previous runs. Based on their reliability, features are augmented or eliminated over runs. This reduces the size of the representation and makes it more reliable in crowded scenes. We tested the proposed method on data sets collected from highly crowded Indian urban outdoor settings. Experiments show that with the help of a small subset (10%) of the detected features, we can reliably localize the camera. We achieve superior localization accuracy even when more than 90% of the pixels are occluded or dynamic.
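The retrieval step can be pictured with the small sketch below: database and query images are bag-of-words histograms, per-word weights (learned from feature reliability across runs in the paper, but fixed at random here) rescale the histograms, and the best match is found by cosine similarity. The vocabulary size, histograms, and weights are synthetic.

```python
# Weighted bag-of-words retrieval sketch for visual localization.
import numpy as np

def localize(query_hist, db_hists, word_weights):
    q = query_hist * word_weights
    db = db_hists * word_weights
    q = q / (np.linalg.norm(q) + 1e-12)
    db = db / (np.linalg.norm(db, axis=1, keepdims=True) + 1e-12)
    scores = db @ q                      # cosine similarity per database image
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(1)
vocab_size, n_db = 1000, 50
db_hists = rng.poisson(2.0, size=(n_db, vocab_size)).astype(float)
word_weights = rng.uniform(0.0, 1.0, vocab_size)           # stand-in reliability weights
query_hist = db_hists[17] + rng.poisson(0.5, vocab_size)   # noisy view of image 17
best, scores = localize(query_hist, db_hists, word_weights)
print(best)   # should recover index 17
```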


intelligent robots and systems | 2013

Learning support order for manipulation in clutter

Swagatika Panda; A. H. Abdul Hafez; C. V. Jawahar

Understanding the positional semantics of the environment plays an important role in manipulating an object in clutter. Interaction with the surrounding objects must be considered in order to perform the task without causing objects to fall or get damaged. In this paper, we learn the semantics in terms of the support relationships among different objects in a cluttered environment by utilizing various photometric and geometric properties of the scene. To manipulate an object of interest, we use the inferred support relationships to derive a sequence in which its surrounding objects should be removed while causing minimal damage to the environment. We believe this work can push the boundary of robotic applications in grasping, object manipulation, and picking-from-bin towards objects of generic shape and size and scenarios with physical contact and overlap. We have created an RGBD dataset that consists of various everyday objects present in clutter and explore many different settings involving different kinds of object-object interaction. We successfully learn support relationships and predict support order in these settings.
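As an illustration of how an inferred support relationship can be turned into a removal sequence, the sketch below treats objects as nodes of a support graph and repeatedly removes objects that have nothing (remaining) resting on them. The relations are hand-written here; in the paper they are learned from RGBD data.

```python
# Derive a removal order from pairwise support relations.
from collections import defaultdict

def removal_order(supports):
    """supports: list of (supporter, supported) pairs. Returns objects in an
    order that removes supported objects before their supporters."""
    above = defaultdict(set)        # supporter -> objects resting on it
    objects = set()
    for supporter, supported in supports:
        above[supporter].add(supported)
        objects.update((supporter, supported))
    order = []
    remaining = set(objects)
    while remaining:
        # Objects with nothing remaining on top of them can be removed safely.
        free = [o for o in remaining if not (above[o] & remaining)]
        if not free:                # cyclic (mutual) support; needs special handling
            break
        order.extend(sorted(free))
        remaining -= set(free)
    return order

# Hypothetical clutter: a box supports a cup and a book, the book supports a pen.
print(removal_order([("box", "cup"), ("box", "book"), ("book", "pen")]))
# -> ['cup', 'pen', 'book', 'box']
```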


international conference on pattern recognition | 2006

Target Model Estimation using Particle Filters for Visual Servoing

A. H. Abdul Hafez; C. V. Jawahar

In this paper, we present a novel method for model estimation in visual servoing. The method employs a particle filter algorithm to estimate the depth of the image features online. A Gaussian probabilistic model is employed to model the object points in the current camera frame. A set of 3D samples drawn from the model is projected into the image space in the next frame; the 3D sample that maximizes the likelihood is considered the most probable real-world 3D point. The variance of the depth density function converges to a very small value within a few iterations. Results show an accurate estimate of the depth/model and a high level of stability in the visual servoing process.
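A simplified single-feature version of such a depth particle filter is sketched below, assuming a pinhole camera, identity rotation, and a known translation between frames: depth hypotheses along the feature ray are projected into the new view, weighted by a pixel-noise likelihood, and resampled. All numbers are placeholders.

```python
# Particle-filter sketch for online depth estimation of one image feature.
import numpy as np

rng = np.random.default_rng(2)
f = 500.0                       # focal length in pixels (assumed)
true_depth = 2.0                # ground-truth depth of the feature (m)
ray = np.array([0.1, 0.05, 1.0])            # feature ray in normalized coords (z = 1)
particles = rng.uniform(0.5, 5.0, 500)      # depth hypotheses along the ray
weights = np.ones_like(particles) / particles.size

for step in range(20):
    # Known camera translation along x between frames (simplified motion).
    t = np.array([0.02 * (step + 1), 0.0, 0.0])
    X_true = ray * true_depth - t
    z_obs = f * X_true[:2] / X_true[2] + rng.normal(0.0, 1.0, 2)   # noisy pixel
    X_hyp = ray[None, :] * particles[:, None] - t
    z_hyp = f * X_hyp[:, :2] / X_hyp[:, 2:3]
    err2 = ((z_hyp - z_obs) ** 2).sum(axis=1)
    weights *= np.exp(-err2 / (2.0 * 2.0 ** 2))    # pixel-noise likelihood
    weights /= weights.sum()
    # Resample and add a little jitter to keep particle diversity.
    idx = rng.choice(particles.size, particles.size, p=weights)
    particles = particles[idx] + rng.normal(0.0, 0.02, particles.size)
    weights = np.ones_like(particles) / particles.size

print(particles.mean(), particles.std())   # should concentrate near true_depth
```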


international conference on control, automation, robotics and vision | 2006

Probabilistic Integration of 2D and 3D Cues for Visual Servoing

A. H. Abdul Hafez; C. V. Jawahar

In this paper, we present a new integration method for improving the performance of visual servoing. The method integrates the image-based visual servoing (IBVS) and position-based visual servoing (PBVS) approaches to satisfy the widely varying requirements of the visual servoing process. We define an integration rule for the IBVS and PBVS controllers; density functions that determine the weighting factor of each controller are defined to satisfy these requirements. We prove that this integration method provides global stability and avoids local minima. The new integration method is validated on positioning tasks and compared with other switching methods.


international conference on advanced robotics | 2015

Switching method to avoid algorithmic singularity in vision-based control of a space robot

Suril V. Shah; V. V. Anurag; A. H. Abdul Hafez; K. Madhava Krishna

This paper presents a novel approach to algorithmic-singularity avoidance for reactionless visual servoing of a satellite-mounted space robot. A task-priority approach is used to perform visual servoing while achieving reactionless manipulation of the space robot. Algorithmic singularities are prominent in such cases where two tasks are prioritized. The algorithmic singularity differs from kinematic and dynamic singularities in that it is an artefact of the tasks at hand and is difficult to predict. In this paper, we present a geometric interpretation of its occurrence and propose a method to avoid it. The method involves path planning in image space and generates a sequence of images that guides the robot towards the goal while avoiding the algorithmic singularity. The method is illustrated through numerical studies on a 6-DOF planar dual-arm robot mounted on a service satellite.
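One way to picture the problem is sketched below: the secondary task acts through its Jacobian projected into the null space of the primary task, so a collapsing smallest singular value of that projected Jacobian signals an approaching algorithmic singularity even when each Jacobian is individually well conditioned. The detection threshold, the switching action, and the Jacobians are illustrative assumptions, not the paper's geometric criterion.

```python
# Detect an approaching algorithmic singularity in a two-task priority scheme.
import numpy as np

def algorithmic_singularity_measure(J1, J2):
    N1 = np.eye(J1.shape[1]) - np.linalg.pinv(J1) @ J1       # null space of task 1
    return np.linalg.svd(J2 @ N1, compute_uv=False).min()    # smallest singular value

rng = np.random.default_rng(3)
n = 12
J1 = rng.standard_normal((6, n))
J2_ok = rng.standard_normal((3, n))
# A secondary Jacobian whose rows lie (almost) in the row space of J1:
J2_bad = rng.standard_normal((3, 6)) @ J1 + 1e-6 * rng.standard_normal((3, n))

for name, J2 in [("well separated", J2_ok), ("near algorithmic singularity", J2_bad)]:
    m = algorithmic_singularity_measure(J1, J2)
    action = "switch / replan in image space" if m < 1e-3 else "continue"
    print(f"{name}: sigma_min = {m:.2e} -> {action}")
```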

Collaboration


Dive into A. H. Abdul Hafez's collaborations.

Top Co-Authors

C. V. Jawahar
International Institute of Information Technology

K. Madhava Krishna
International Institute of Information Technology

Suril V. Shah
International Institute of Information Technology

Swagatika Panda
International Institute of Information Technology

V. V. Anurag
International Institute of Information Technology

Anil Kumar Nelakanti
International Institute of Information Technology

P. Mithun
International Institute of Information Technology

Arun Agarwal
University of Hyderabad

Manpreet Arora
International Institute of Information Technology

Rachit Bhargava
International Institute of Information Technology