Alaa M. Khamis
Suez University
Publications
Featured research published by Alaa M. Khamis.
Information Fusion | 2013
Bahador Khaleghi; Alaa M. Khamis; Fakhreddine Karray; Saiedeh Razavi
There has been ever-increasing interest in multi-disciplinary research on multisensor data fusion technology, driven by its versatility and diverse areas of application. There is therefore a real need for an analytical review of recent developments in the data fusion domain. This paper presents a comprehensive review of the data fusion state of the art, exploring its conceptualizations, benefits, and challenging aspects, as well as existing methodologies. In addition, several future directions of research in the data fusion community are highlighted and described.
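As a minimal, generic illustration of one classical fusion step (not taken from the paper), the sketch below fuses two independent noisy estimates of the same quantity by inverse-variance weighting; the `fuse` helper and the numbers are hypothetical.

```python
# Minimal illustration (not from the paper): fusing two independent,
# noisy estimates of the same quantity by inverse-variance weighting,
# one of the classical building blocks of multisensor data fusion.

def fuse(x1: float, var1: float, x2: float, var2: float) -> tuple[float, float]:
    """Return the fused estimate and its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2            # weights = inverse variances
    fused_var = 1.0 / (w1 + w2)                # fused variance is smaller than either input
    fused_x = fused_var * (w1 * x1 + w2 * x2)  # precision-weighted mean
    return fused_x, fused_var

if __name__ == "__main__":
    # Example: one range reading of 10.2 m (variance 0.4) and one of 9.8 m (variance 0.1)
    print(fuse(10.2, 0.4, 9.8, 0.1))  # -> (9.88, 0.08)
```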
Robotics, Automation and Mechatronics | 2006
Miguel Angel Salichs; R. Barber; Alaa M. Khamis; María Malfaz; Javier F. Gorostiza; Rakel Pacheco; Rafael Rivas; Ana Corrales; Elena Delgado; David Garcia
Human-robot social interaction plays an important role in spreading the use of robots in human daily life. Through effective social interaction, robots will be able to perform many tasks in human society. These tasks may include, but are not limited to, handling various household duties, providing medical care for elderly people, assisting people with motor or cognitive disabilities, educational entertainment (edutainment), personal assistance, and giving directions at information points in public places. These applications require social robots that are able to interact with humans as partners, if not peers. This paper presents Maggie, a robotic platform developed at RoboticsLab for research on human-robot social interaction. The interaction modules developed for the platform are also described.
Robot and Human Interactive Communication | 2006
Javi F. Gorostiza; R. Barber; Alaa M. Khamis; María Malfaz; Rakel Pacheco; Rafael Rivas; Ana Corrales; Elena Delgado; Miguel Angel Salichs
This paper presents a framework for multimodal human-robot interaction. The proposed framework is being implemented in a personal robot called Maggie, developed at the RoboticsLab of the University Carlos III of Madrid for social interaction research. The control architecture of this personal robot is a hybrid control architecture called AD (automatic-deliberative) that incorporates an emotion control system (ECS). Maggie's main goal is to establish a peer-to-peer relationship with humans. To achieve this goal, a set of human-robot interaction skills has been developed based on the proposed framework. These interaction skills involve tactile, visual, remote voice, and sound modes. Multimodal fusion and synchronization are also presented in this paper.
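As a hedged sketch of how time-stamped events from different interaction modes might be grouped for multimodal fusion (the `Event` class, the `fuse_events` helper, and the synchronization window are assumptions for illustration, not Maggie's actual AD implementation):

```python
# Hedged sketch (not the AD architecture itself): grouping time-stamped
# events from different interaction modes into multimodal "frames"
# when they fall within a common synchronization window.
from dataclasses import dataclass

@dataclass
class Event:
    mode: str       # e.g. "tactile", "voice", "vision", "sound"
    payload: str    # recognized content, e.g. a spoken command
    stamp: float    # arrival time in seconds

def fuse_events(events: list[Event], window: float = 0.5) -> list[list[Event]]:
    """Group events whose timestamps lie within `window` seconds of the
    first event of the current frame; each frame can then be handled as
    one multimodal interaction act."""
    frames: list[list[Event]] = []
    for ev in sorted(events, key=lambda e: e.stamp):
        if frames and ev.stamp - frames[-1][0].stamp <= window:
            frames[-1].append(ev)
        else:
            frames.append([ev])
    return frames

if __name__ == "__main__":
    log = [Event("voice", "hello", 0.10),
           Event("tactile", "head touched", 0.32),
           Event("vision", "face detected", 1.40)]
    for frame in fuse_events(log):
        print([(e.mode, e.payload) for e in frame])
```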
Advances in Social Media Analysis | 2015
Alaa M. Khamis; Ahmed Hussein; Ahmed M. Elmogy
Multi-robot systems (MRS) are groups of robots designed to exhibit some collective behavior. Through this collective behavior, goals that are impossible for a single robot to achieve become feasible and attainable. MRS offer several benefits over single-robot systems, such as an increased ability to handle task complexity, improved performance and reliability, and simplicity of individual robot design. These benefits have attracted many researchers from academia and industry to investigate how to design and develop robust, versatile MRS by solving a number of challenging problems, such as complex task allocation, group formation, cooperative object detection and tracking, communication relaying, and self-organization, to name just a few. One of the most challenging problems in MRS is how to optimally assign a set of robots to a set of tasks in a way that optimizes overall system performance subject to a set of constraints. This is known as the Multi-robot Task Allocation (MRTA) problem. MRTA is especially complex when it involves heterogeneous, unreliable robots with different capabilities that are required to perform various tasks with different requirements and constraints in an optimal way. This chapter provides a comprehensive review of the challenging aspects of the MRTA problem, recent approaches to tackling it, and future research directions.
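As a hedged sketch, the simplest MRTA instance (single-task robots, single-robot tasks, instantaneous assignment) reduces to a linear assignment problem; the example below solves it with SciPy's Hungarian-algorithm implementation over a made-up cost matrix, which is far simpler than the general formulations surveyed in the chapter.

```python
# Hedged sketch: the simplest MRTA instance reduces to an assignment
# problem solvable with the Hungarian algorithm. The cost matrix below
# is made up for illustration only.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = expected cost of robot i performing task j
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.5, 5.0],
    [3.0, 2.0, 2.0],
])

rows, cols = linear_sum_assignment(cost)   # minimizes total cost
for r, c in zip(rows, cols):
    print(f"robot {r} -> task {c} (cost {cost[r, c]})")
print("total cost:", cost[rows, cols].sum())
```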
International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2010
Bahador Khaleghi; Alaa M. Khamis; Fakhreddine Karray
This paper reports on ongoing research on the development of data fusion systems capable of processing soft as well as hard data. Such fusion systems are distinguished from conventional systems, where input data are assumed to be provided by typically well-characterized electronic sensor systems. The incorporation of soft, human-generated data into the fusion process is an emerging trend in the fusion community, motivated largely by asymmetric warfare situations in which observational opportunities for traditional hard sensors are restricted. Random finite set theory is a mathematical framework with powerful representational and computational abilities, making it a promising approach for addressing several fundamental challenges in soft/hard fusion systems. This paper describes the first prototype soft/hard fusion system based on random finite set theory. Experimental results obtained using the developed system demonstrate the plausibility as well as the efficiency of a random finite set theoretic approach to the fusion of soft and hard data.
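The paper's random finite set formulation is far richer, but as a hedged, much-simplified illustration of combining hard and soft evidence, the sketch below performs a Bayesian update on a 1-D position grid using a Gaussian sensor likelihood and a coarse likelihood derived from a human report; all numbers and the interval model are assumptions for illustration only.

```python
# Hedged sketch (much simpler than the paper's random-finite-set
# formulation): Bayesian fusion of a "hard" range-sensor likelihood
# with a "soft" human report on a 1-D position grid.
import numpy as np

x = np.linspace(0.0, 100.0, 501)                 # candidate target positions (m)
prior = np.full_like(x, 1.0 / x.size)            # uninformative prior

# Hard data: sensor reads 42 m with Gaussian noise (sigma = 3 m)
hard_lik = np.exp(-0.5 * ((x - 42.0) / 3.0) ** 2)

# Soft data: human reports "somewhere between 30 m and 50 m",
# modeled as a flat likelihood over that interval with a small floor
soft_lik = np.where((x >= 30.0) & (x <= 50.0), 1.0, 0.05)

posterior = prior * hard_lik * soft_lik          # assume conditional independence
posterior /= posterior.sum()

print("MAP estimate:", x[np.argmax(posterior)])  # ~42 m, sharpened by both sources
```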
Autonomous Robots | 2003
Alaa M. Khamis; Francisco José Rodríguez; Miguel Angel Salichs
This paper describes an architecture that can be used to build remote laboratories for interacting remotely via the Internet with mobile robots using different interaction devices. A supervisory control strategy has been used to develop the remote laboratory in order to reduce the required communication data rates and the system's sensitivity to network delays. Users interact with the remote system at a more abstract level using high-level commands. The local robots' autonomy has been increased by encapsulating all of the robots' behaviors in different types of skills. User interfaces have been designed using the visual proxy pattern to facilitate future extension and code reuse. The developed remote laboratory has been integrated into an educational environment in the field of indoor mobile robotics. This environment is currently being used as part of an international project to develop a distributed laboratory for autonomous and teleoperated systems (IECAT, 2003).
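As a hedged sketch of the supervisory-control idea (hypothetical class and method names, not the laboratory's actual API), the code below encapsulates a robot behavior as a skill that a remote user triggers with a single high-level command, so only short messages, rather than raw sensor and actuator streams, cross the network.

```python
# Hedged sketch (hypothetical names): encapsulating robot behaviors as
# "skills" that a remote user triggers with short high-level commands.
from abc import ABC, abstractmethod

class Skill(ABC):
    name: str
    @abstractmethod
    def run(self, **params) -> str:
        """Execute locally on the robot and return a short status string."""

class GoToSkill(Skill):
    name = "goto"
    def run(self, x: float = 0.0, y: float = 0.0) -> str:
        # ...local planning and motion control would run here...
        return f"reached ({x}, {y})"

class SupervisoryServer:
    def __init__(self, skills: list[Skill]):
        self._skills = {s.name: s for s in skills}
    def handle(self, command: str, **params) -> str:
        # One small message in, one small message out: this is what keeps
        # the scheme tolerant of limited bandwidth and network delays.
        return self._skills[command].run(**params)

if __name__ == "__main__":
    server = SupervisoryServer([GoToSkill()])
    print(server.handle("goto", x=1.5, y=2.0))
```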
Journal of Intelligent and Robotic Systems | 2011
Alaa M. Khamis; Ahmed M. Elmogy; Fakhri Karray
In mobile surveillance systems, complex task allocation addresses how to optimally assign a set of surveillance tasks to a set of mobile sensing agents so as to maximize overall expected performance, taking into account the priorities of the tasks and the skill ratings of the mobile sensors. This paper presents a market-based approach to complex task allocation, where complex tasks are tasks that can be decomposed into subtasks. Both centralized and hierarchical allocations are investigated as winner determination strategies for different levels of allocation and for static and dynamic search tree structures. The comparative results show that hierarchical dynamic tree task allocation outperforms all the other techniques, especially in complex surveillance operations where a large number of robots is used to scan a large number of areas.
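As a hedged sketch of the market-based idea in its simplest form, a sequential single-item auction (not the paper's hierarchical winner-determination strategies), where task priorities and robot costs are illustrative:

```python
# Hedged sketch: tasks are auctioned in priority order and each task is
# awarded to the robot with the lowest bid. Costs/priorities are made up.

def auction(tasks: dict[str, float], robot_costs: dict[str, dict[str, float]]):
    """tasks: task -> priority; robot_costs: robot -> {task: cost}.
    Each task goes to the unassigned robot with the lowest bid."""
    assignment: dict[str, str] = {}
    busy: set[str] = set()
    for task in sorted(tasks, key=tasks.get, reverse=True):   # highest priority first
        bids = {r: c[task] for r, c in robot_costs.items() if r not in busy}
        if not bids:
            break
        winner = min(bids, key=bids.get)
        assignment[task] = winner
        busy.add(winner)
    return assignment

if __name__ == "__main__":
    tasks = {"scan_area_A": 0.9, "scan_area_B": 0.6}
    costs = {"r1": {"scan_area_A": 5.0, "scan_area_B": 2.0},
             "r2": {"scan_area_A": 3.0, "scan_area_B": 4.0}}
    print(auction(tasks, costs))   # {'scan_area_A': 'r2', 'scan_area_B': 'r1'}
```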
International Conference on Robotics and Automation | 2003
Alaa M. Khamis; D.M. Rivero; Francisco José Rodríguez; Miguel Angel Salichs
Building remote laboratories for experiments with mobile robots requires expertise in a number of different disciplines, such as Internet programming and telematic and mechatronic systems. Remote laboratories offer students access to complementary experiments, not available at their own university, in support of lectures. An intuitive user interface is required so that inexperienced users can control the robot remotely. This paper describes a design-pattern-based architecture for building remote laboratories for mobile robotics. The proposed remote laboratory is currently used to provide remote experiments on indoor mobile robotics, addressing different approaches to the main problems of mobile robotics, such as sensing, motion control, localization, world modeling, and planning. These experiments are being used in several mobile robotics and autonomous systems courses at the undergraduate and graduate levels.
Expert Systems With Applications | 2016
Mohamed Lamine Mekhalfi; Farid Melgani; Abdallah Zeggada; Francesco G. B. De Natale; Mohammed A.-M. Salem; Alaa M. Khamis
This paper presents a prototype to assist blind people in indoor environments. The prototype incorporates recognition and guidance units as well as a voice-user interface, and tests in a public indoor space demonstrate promising capabilities. Assistive technologies for blind people are growing fast, providing useful tools to support daily activities and improve social inclusion. Most of these technologies focus mainly on helping blind people navigate and avoid obstacles. Other works emphasize assisting them in recognizing surrounding objects. Very few of them, however, couple both aspects (i.e., navigation and recognition). To address these needs, we describe in this paper an innovative prototype that offers the capabilities to (i) move autonomously and (ii) recognize multiple objects in public indoor environments. It incorporates lightweight hardware components (camera, IMU, and laser sensors), all mounted on a reasonably sized integrated device to be worn on the chest. It requires the indoor environment to be blind-friendly, i.e., prior information about it should be prepared and loaded into the system beforehand. Its algorithms are mainly based on advanced computer vision and machine learning approaches. Interaction between the user and the system is performed through speech recognition and synthesis modules. The prototype offers the user the possibility to (i) walk across the site to reach a desired destination, avoiding static and mobile obstacles, and (ii) ask the system through vocal interaction to list the prominent objects in the user's field of view. We illustrate the performance of the proposed prototype through experiments conducted in a blind-friendly indoor space set up at our Department premises.
International Conference on Signals, Circuits and Systems | 2009
Yun-Qian Miao; Alaa M. Khamis; Mohamed S. Kamel
This paper reviews coordinated motion control strategies for mobile sensors in mobile surveillance systems. Mobile surveillance systems include a vast array of mobile sensing nodes with varying sensing modalities that can continuously sense the volume of interest. These distributed nodes are capable of sensing, processing, locomotion, and communication with other nodes. One of the fundamental problems in mobile surveillance systems is how to coordinate these distributed nodes so that they can move together in concert. Based on the nature of the surveillance task, three coordinated motion control strategies are described: direct control, intentional control, and emergent control.
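As a hedged illustration of the emergent-control strategy only (purely local separation and cohesion rules with illustrative parameters, not drawn from the paper), group behavior arises without any central coordinator:

```python
# Hedged illustration of emergent control: each node follows local rules
# (separation from close neighbors, cohesion toward the local centroid),
# and coordinated group motion emerges without a central coordinator.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(8, 2))        # 8 nodes in a 10x10 m area

def step(pos: np.ndarray, sep_dist: float = 1.0, gain: float = 0.1) -> np.ndarray:
    new = pos.copy()
    for i in range(len(pos)):
        diff = pos - pos[i]                       # vectors to all other nodes
        dist = np.linalg.norm(diff, axis=1)
        cohesion = diff[dist > 0].mean(axis=0)    # move toward local centroid
        too_close = (dist > 0) & (dist < sep_dist)
        separation = -diff[too_close].sum(axis=0) if too_close.any() else 0.0
        new[i] += gain * (cohesion + separation)
    return new

for _ in range(50):
    pos = step(pos)
print("node positions after 50 steps:\n", np.round(pos, 2))
```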