Abdul Rahman Hafiz
University of Fukui
Publications
Featured research published by Abdul Rahman Hafiz.
international conference on neural information processing | 2011
Abdul Rahman Hafiz; Md. Faijul Amin; Kazuyuki Murase
Computer vision systems are among the newest approaches to human-computer interaction. Recently, the direct use of our hands as natural input devices has shown promising progress. Building on this progress, we introduce a hand gesture recognition system that recognizes gestures in real time in unconstrained environments. The system consists of three components: real-time hand tracking, hand-tree construction, and hand gesture recognition. Our main contributions are: (1) a simple way to represent the hand gesture after applying a thinning algorithm to the image, and (2) the use of a complex-valued neural network (CVNN) model for real-valued classification. We tested our system on 26 different gestures to evaluate the effectiveness of our approach. The results show that the classification ability of a single-layered CVNN on unseen data is comparable to that of a conventional real-valued neural network (RVNN) with one hidden layer. Moreover, the CVNN converges much faster than the RVNN in most cases.
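The abstract does not give implementation details; the following is a minimal illustrative sketch (in Python/NumPy, with hypothetical skeleton coordinates and layer sizes, not the authors' actual code) of the two contributions: encoding thinned-skeleton end points as complex numbers x + iy and scoring gesture classes with a single-layered complex-valued network whose outputs are made real by taking magnitudes.

```python
import numpy as np

def encode_skeleton(points, img_w, img_h):
    """Encode 2D skeleton points (x, y) as complex numbers x + iy,
    normalised by the frame size so inputs stay bounded."""
    pts = np.asarray(points, dtype=float)
    return pts[:, 0] / img_w + 1j * pts[:, 1] / img_h

class SingleLayerCVNN:
    """Single-layered complex-valued network: complex inputs and weights,
    real-valued class scores obtained as the magnitude of each output."""
    def __init__(self, n_in, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * (rng.standard_normal((n_classes, n_in))
                        + 1j * rng.standard_normal((n_classes, n_in)))
        self.b = np.zeros(n_classes, dtype=complex)

    def forward(self, z):
        u = self.W @ z + self.b   # complex pre-activations
        return np.abs(u)          # real-valued scores, one per gesture class

# Hypothetical 5-point skeleton (e.g. fingertip endpoints) in a 640x480 frame
skeleton = [(120, 80), (200, 60), (280, 55), (350, 70), (410, 120)]
z = encode_skeleton(skeleton, 640, 480)
net = SingleLayerCVNN(n_in=len(z), n_classes=26)
print(net.forward(z).argmax())    # predicted gesture index (untrained weights)
```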
Neural Processing Letters | 2015
Abdul Rahman Hafiz; Ahmed Yarub H. Al-Nuaimi; Md. Faijul Amin; Kazuyuki Murase
Complex-valued neural networks (CVNNs), which can process complex-valued data directly, have been applied to a number of practical problems, especially in signal and image processing. In this paper, we apply a CVNN as a classification algorithm for the skeletal wireframe data generated from hand gestures. A CVNN with one hidden layer that maps complex-valued input to real-valued output was used, a training algorithm based on the Levenberg-Marquardt algorithm (CLMA) was derived, and the task was to recognize 26 different gestures representing the English alphabet. The initial image-processing part consists of three modules: real-time hand tracking, hand-skeleton construction, and hand gesture recognition. We achieved (1) efficient and accurate gesture extraction and representation in the complex domain, (2) training of the CVNN using CLMA, and (3) a demonstration of the superiority of these methods through a comparison with complex-valued learning vector quantization. A comparison with a real-valued neural network shows that a CVNN with CLMA provides higher recognition performance together with significantly faster training. Moreover, six different activation functions were compared and their utility is discussed.
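The complex Levenberg-Marquardt update underlying CLMA generalizes the real-valued rule by replacing transposes with conjugate transposes, delta = (J^H J + mu*I)^(-1) J^H e. Below is a toy sketch of that step on a linear complex least-squares problem, not the paper's network; a full CLMA derivation for non-holomorphic activations would use Wirtinger-calculus Jacobians.

```python
import numpy as np

def clm_step(J, e, mu):
    """One complex Levenberg-Marquardt update:
    delta = (J^H J + mu*I)^(-1) J^H e, with ^H the conjugate transpose."""
    H = J.conj().T @ J + mu * np.eye(J.shape[1])
    return np.linalg.solve(H, J.conj().T @ e)

# Toy usage: recover complex weights w so that X @ w matches the targets t.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 3)) + 1j * rng.standard_normal((40, 3))
w_true = np.array([1.0 + 1.0j, -0.5j, 2.0])
t = X @ w_true
w = np.zeros(3, dtype=complex)
for _ in range(20):
    e = t - X @ w                   # residuals
    w += clm_step(X, e, mu=1e-3)    # Jacobian of X @ w with respect to w is X
print(np.round(w, 3))               # approaches w_true
```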
international symposium on neural networks | 2012
Abdul Rahman Hafiz; Faijul Amin; Kazuyuki Murase
With the advancement of technology, complex-valued data arise in many practical applications, especially in signal and image processing. In this paper, we introduce a new application by generating a complex-valued dataset that represents various hand gestures in the complex domain. The system consists of three components: real-time hand tracking, hand-skeleton construction, and hand gesture recognition. A complex-valued neural network (CVNN) with one hidden layer, trained with the complex Levenberg-Marquardt (CLM) algorithm, was used to recognize 26 different gestures representing the English alphabet. The results show that CLM provides reasonable recognition performance. In addition, a comparison among different activation functions is presented.
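The abstract does not say which activation functions were compared; two families commonly used in CVNNs, shown here only as an illustrative sketch, are the "split" type (a real nonlinearity applied separately to real and imaginary parts) and the "amplitude-phase" type (squash the magnitude, preserve the phase).

```python
import numpy as np

def split_tanh(z):
    """'Split' activation: a real nonlinearity applied separately to the
    real and imaginary parts of the complex pre-activation."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def amplitude_phase_tanh(z):
    """'Amplitude-phase' activation: squash the magnitude, keep the phase."""
    return np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))

z = np.array([0.5 + 2.0j, -1.0 - 0.2j])
print(split_tanh(z))
print(amplitude_phase_tanh(z))
```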
international symposium on neural networks | 2008
Fady Alnajjar; Abdul Rahman Hafiz; I. Bin Mohd Zin; Kazuyuki Murase
Based on indications from neuroscience and psychology, both perception and action can be internally simulated by activating sensory and motor areas in the brain without external sensory input and without any resulting overt behavior. This hypothesis can be highly useful in real robot applications. The robot, for instance, can compensate for corrupted sensory inputs by replacing them with its internal simulation. The usefulness of this mechanism depends strongly on the agent's experience: the more the agent knows about the environment, the stronger the internal representation it can build. Many works have addressed this hypothesis with various levels of success; at the sensorimotor abstraction level, where data are extracted from the environment, however, none of them have so far used the robot's vision as a sensory input. In this study, vision-sensorimotor abstraction is realized through memory-based learning in a real mobile robot, "Hemisson", to investigate the possibility of explaining its inner world based on internal simulation of perception and action at the abstract level. Analysis of the experiments illustrates that our robot, with vision as its sensory input, developed a simple association or anticipation mechanism through interaction with the environment, which enables it, based on its history and the present situation, to guide its behavior in the absence of any external interaction.
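The paper's memory-based learning algorithm is not detailed in the abstract; purely as an illustration of the general idea (store sensorimotor experiences and reuse the most similar past experience to anticipate missing sensory input), here is a minimal nearest-neighbour sketch with made-up sensor and motor vectors.

```python
import numpy as np

class SensorimotorMemory:
    """Memory-based anticipation: store (sensor, motor, next-sensor) tuples
    and, when the sensory input is missing or corrupted, substitute the
    stored next-sensor value of the most similar past (sensor, motor) pair."""
    def __init__(self):
        self.keys, self.next_sensors = [], []

    def store(self, sensor, motor, next_sensor):
        self.keys.append(np.concatenate([sensor, motor]))
        self.next_sensors.append(np.asarray(next_sensor, dtype=float))

    def anticipate(self, sensor, motor):
        query = np.concatenate([sensor, motor])
        dists = [np.linalg.norm(query - k) for k in self.keys]
        return self.next_sensors[int(np.argmin(dists))]

mem = SensorimotorMemory()
mem.store(sensor=[0.2, 0.8], motor=[1.0, 1.0], next_sensor=[0.3, 0.7])
mem.store(sensor=[0.9, 0.1], motor=[0.0, 1.0], next_sensor=[0.8, 0.2])
# Camera reading lost: anticipate it from the internal model instead.
print(mem.anticipate(sensor=[0.25, 0.75], motor=[1.0, 1.0]))
```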
Archive | 2012
Abdul Rahman Hafiz; Kazuyuki Murase
This paper introduces an autonomous camera-equipped robot platform for active vision research and as an education tool. Thanks to recent progress in electronics and computing power, in control and agent technology, and in computer vision and machine learning, an autonomous robot platform capable of solving high-level deliberate tasks in natural environments can now be realized. We combined iPhone 4 technology with Lego NXT to build a mobile robot called iRov. iRov is a desktop-size robot that performs image processing onboard using the A4 chip, a System-on-a-Chip (SoC) in the iPhone 4. With the CPU and GPU working in parallel, we demonstrate real-time filters and 3D object recognition. Using this platform, processing was 10 times faster than using the CPU alone.
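The abstract gives no code-level detail of how work is split between the A4's CPU and GPU; purely as an illustration of the pipeline structure (a filter stage and a recognition stage working on successive frames concurrently rather than sequentially), here is a small producer-consumer sketch in Python threads, where the filter stage stands in for what would be GPU shader passes on the device.

```python
import queue
import threading

frames = queue.Queue()      # camera frames waiting for the filter stage
filtered = queue.Queue()    # filtered frames waiting for recognition

def filter_stage():
    """Stands in for the GPU filter pass (e.g. a per-frame edge filter)."""
    while True:
        frame = frames.get()
        if frame is None:
            filtered.put(None)
            break
        filtered.put(f"edges({frame})")

def recognition_stage():
    """Stands in for CPU-side object recognition running concurrently."""
    while True:
        item = filtered.get()
        if item is None:
            break
        print("recognise:", item)

workers = [threading.Thread(target=filter_stage),
           threading.Thread(target=recognition_stage)]
for w in workers:
    w.start()
for i in range(3):
    frames.put(f"frame{i}")   # simulated camera frames
frames.put(None)              # end-of-stream marker
for w in workers:
    w.join()
```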
Journal of Robotics | 2011
Abdul Rahman Hafiz; Fady Alnajjar; Kazuyuki Murase
Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system, which can enhance the robot's real-time interaction with humans, is one of the main keys toward realizing such an autonomous robot. In this work, we propose a bio-inspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge-detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, to support the first algorithm, we improve the robot's tracking ability by designing a varying photoreceptor distribution corresponding to that of the human visual system. The experimental results verified the validity of the model. The robot could obtain a clear view in real time and build a mental map that helped it to be aware of the users in front of it and to develop a positive interaction with them.
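Neither algorithm is specified in detail in the abstract; the sketch below illustrates the two ideas under simple assumptions of our own: an edge map obtained by lateral inhibition (each pixel compared against its local mean, a rough stand-in for horizontal-cell feedback) and a foveated sampling pattern that is dense near the image centre and sparse at the periphery.

```python
import numpy as np

def horizontal_cell_edges(img, k=5, threshold=0.1):
    """Lateral-inhibition style edge map: each pixel minus the mean of its
    k x k neighbourhood, a rough stand-in for horizontal-cell feedback."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    local_mean = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            local_mean[y, x] = padded[y:y + k, x:x + k].mean()
    return (np.abs(img - local_mean) > threshold).astype(np.uint8)

def foveal_samples(h, w, n_rings=8, pts_per_ring=16):
    """Non-uniform sampling: dense near the image centre, sparse at the
    periphery, loosely mimicking the photoreceptor distribution."""
    cy, cx = h / 2.0, w / 2.0
    pts = []
    for r in range(1, n_rings + 1):
        radius = (r / n_rings) ** 2 * (min(cy, cx) - 1)   # quadratic spacing
        for a in np.linspace(0, 2 * np.pi, pts_per_ring, endpoint=False):
            pts.append((int(cy + radius * np.sin(a)),
                        int(cx + radius * np.cos(a))))
    return pts

img = np.random.rand(32, 32)
print(horizontal_cell_edges(img).sum(), len(foveal_samples(32, 32)))
```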
international conference on neural information processing | 2009
Fady Alnajjar; Abdul Rahman Hafiz; Indra Bin Mohd Zin; Kazuyuki Murase
Based on indications from neuroscience and psychology, both perception and action can be internally simulated in organisms by activating sensory and/or motor areas in the brain without actual sensory input and/or without any resulting behavior. Organisms usually use this phenomenon to cope with missing external inputs. Applying such a phenomenon in a real robot has recently attracted the attention of many researchers. Although some work has been reported on this issue, none of it has so far considered the potential of the robot's vision at the sensorimotor abstraction level, where data are extracted from the environment to build the internal representation. In this study, a novel vision-motor abstraction is realized in a physical robot through a memory-based learning algorithm. Experimental results indicate that our robot, using its vision, could develop a simple anticipation mechanism in its memory from interacting with the environment. This mechanism could guide the robot's behavior in the absence of external inputs.
international conference on robot, vision and signal processing | 2013
Abdul Rahman Hafiz; Kazuyuki Murase
3D robotic vision is proposed using a neural network model that forms sparse distributed memory traces of spatiotemporal episodes of an object. These episodes are generated by the robot's interaction with the environment or by the robot's movement around a 3D object, which changes its perspective of the object. The traces are distributed, with each cell and synapse participating in many traces. This sharing of representational substrate enables similarity-based generalization in the model and thus semantic memory. Results are provided showing that similar spatiotemporal patterns map to similar traces, as a first step toward a 3D robot vision system. The model achieves this property by measuring the degree of similarity between the current input pattern of each frame and the input expected given the preceding frame, and then adding an amount of noise, inversely proportional to that degree of similarity, to the process of choosing the internal representation for the current frame.
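A minimal sketch of the similarity-dependent noise mechanism described above (with hypothetical group and unit sizes, not the authors' model): the closer the current frame is to what was expected from the preceding frame, the less noise is injected when picking one winner per group, so familiar episodes reactivate similar sparse codes while novel ones are pushed toward new codes.

```python
import numpy as np

def choose_code(x_now, x_expected, weights, rng):
    """Pick one winner unit per group (a sparse distributed code), with
    noise scaled by how much the current input departs from expectation."""
    sim = float(x_now @ x_expected /
                (np.linalg.norm(x_now) * np.linalg.norm(x_expected) + 1e-9))
    noise_level = 1.0 - max(0.0, sim)
    code = []
    for group_w in weights:                       # one weight matrix per group
        drive = group_w @ x_now                   # bottom-up activation
        drive = drive + noise_level * rng.standard_normal(drive.shape)
        code.append(int(np.argmax(drive)))        # winner unit in this group
    return code, sim

rng = np.random.default_rng(0)
W = rng.random((4, 6, 10))                        # 4 groups, 6 units, 10-dim input
x_now = rng.random(10)
x_expected = x_now + 0.05 * rng.standard_normal(10)   # close to expectation
print(choose_code(x_now, x_expected, W, rng))
```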
world automation congress | 2010
Fady Alnajjar; Abdul Rahman Hafiz; Kazuyuki Murase
Recently, growing attention has been paid to developing humanoid robot controllers that can move robots closer to real-world applications. Several approaches have been proposed to support the learning phase of such a controller, in which the robot gains new knowledge via observation and/or direct guidance from a human or even another robot. These approaches, however, require dynamic learning and memorization techniques through which the robot can keep reforming its internal system over time. Along this line of research, this work investigates an idea inspired by assumptions in neuroscience to develop an incremental learning and memory model that we call Hierarchical Constructive BackPropagation with Memory (HCBPM). The validity of the model was tested by teaching a humanoid robot a group of names through natural interaction with a human. The experimental results indicate that the robot, in a kind of social learning environment, could reform its own memory, learn different color names, and retrieve these data to teach another user what it had learned.
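HCBPM itself is not specified in the abstract; as an illustration of the constructive-backpropagation idea it builds on (grow the network one hidden unit at a time, training each new unit on the remaining error), here is a minimal sketch on a toy problem with made-up hyperparameters.

```python
import numpy as np

def train_constructive(X, y, max_hidden=10, epochs_per_unit=300,
                       lr=0.5, tol=0.05, seed=0):
    """Constructive training sketch: start with no hidden units and keep
    adding tanh units, each trained by gradient descent on the residual
    error, until the MSE drops below `tol` or `max_hidden` is reached."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    prediction = np.zeros(n)
    units = []
    mse = ((prediction - y) ** 2).mean()
    for _ in range(max_hidden):
        w = 0.5 * rng.standard_normal(d)
        b, v = 0.0, 0.0                       # new unit's bias and output weight
        for _ in range(epochs_per_unit):
            h = np.tanh(X @ w + b)
            err = prediction + v * h - y      # residual of the growing network
            grad_h = err * v * (1.0 - h ** 2)
            w -= lr * (X * grad_h[:, None]).mean(axis=0)
            b -= lr * grad_h.mean()
            v -= lr * (err * h).mean()
        prediction = prediction + v * np.tanh(X @ w + b)
        units.append((w, b, v))
        mse = ((prediction - y) ** 2).mean()
        if mse < tol:
            break
    return units, mse

# Toy problem: an XOR-like target, learned by growing units incrementally.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
units, mse = train_constructive(X, y)
print(len(units), "hidden units grown, final mse:", round(mse, 4))
```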
international conference on neural information processing | 2009
Fady Alnajjar; Abdul Rahman Hafiz; Kazuyuki Murase
In recent years, growing attention has been paid to developing human-like robot controllers that can move robots closer to real-world applications. Several approaches have been proposed to support the learning phase of such a controller, such as learning through observation and/or direct guidance from the user. These approaches, however, require incremental learning and memorization techniques with which the robot can design its internal system and keep retraining it over time. This study therefore investigates a new idea for an incremental learning and memory model that we call Hierarchical Constructive BackPropagation with Memory (HCBPM). The validity of the model was tested by teaching a robot a group of names (colors). The experimental results indicate the efficiency of the model in building a social learning environment between the user and the robot. The robot could learn various color names and their different phases, and retrieve these data easily to teach another user what it had learned.