Abd El Rahman Shabayek
University of Luxembourg
Publications
Featured research published by Abd El Rahman Shabayek.
Journal of Intelligent and Robotic Systems | 2012
Abd El Rahman Shabayek; Cédric Demonceaux; Olivier Morel; David Fofi
Unmanned aerial vehicles (UAVs) are increasingly replacing manned systems in situations that are dangerous, remote, or difficult for manned aircraft to access. Their control tasks are empowered by computer vision technology. Visual sensors are robustly used for stabilization as primary, or at least secondary, sensors; hence, UAV stabilization by attitude estimation from visual sensors is a very active research area, and vision-based techniques are proving their effectiveness and robustness in handling this problem. This work provides a comprehensive review of vision-based UAV attitude estimation approaches, starting from horizon-based methods and moving on to vanishing-point, optical-flow, and stereoscopic techniques. A novel segmentation approach for UAV attitude estimation based on polarization is proposed. Our future insights into attitude estimation from uncalibrated catadioptric sensors are also discussed.
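As a concrete illustration of the horizon-based family of methods, the sketch below recovers roll from the slope of a line fitted to detected horizon points and approximates pitch from the horizon's vertical offset. The horizon points, focal length, and image centre are all assumed toy values for illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical horizon points (x, y) in pixel coordinates, e.g. from a
# sky/ground segmentation step in a 640x480 image.
horizon_pts = np.array([[10.0, 240.0],
                        [160.0, 230.0],
                        [320.0, 220.0],
                        [480.0, 210.0],
                        [630.0, 200.0]])

# Fit a line y = m*x + c to the horizon by least squares.
x, y = horizon_pts[:, 0], horizon_pts[:, 1]
m, c = np.polyfit(x, y, 1)

# Roll is (minus) the slope angle of the horizon in the image.
roll_deg = -np.degrees(np.arctan(m))

# Pitch can be approximated from the horizon's vertical offset from the
# image centre, scaled by an assumed focal length of 500 px.
focal_px = 500.0
img_cx, img_cy = 320.0, 240.0
pitch_deg = np.degrees(np.arctan((img_cy - (m * img_cx + c)) / focal_px))

print(round(roll_deg, 2), round(pitch_deg, 2))
```

The same roll/pitch readout is the first stage of most horizon-based stabilizers; real systems differ mainly in how robustly the horizon line itself is detected.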
international conference on image analysis and recognition | 2016
Mohamed Elawady; Ibrahim Sadek; Abd El Rahman Shabayek; Gerard Pons; Sergi Ganau
Breast cancer is one of the leading causes of cancer death among women worldwide. The proposed approach comprises three steps. First, the image is preprocessed to remove speckle noise while preserving important features; three methods are investigated, namely the Frost Filter, Detail Preserving Anisotropic Diffusion, and the Probabilistic Patch-Based Filter. Second, Normalized Cut or Quick Shift is used to provide an initial segmentation map for breast lesions. Third, a postprocessing step is proposed to select the correct region from a set of candidate regions. The approach is evaluated on a dataset of 20 B-mode ultrasound images acquired from the UDIAT Diagnostic Center of Sabadell, Spain, with overall system performance measured against the ground-truth images. The best system performance is achieved by the following combinations: Frost Filter with Quick Shift, Detail Preserving Anisotropic Diffusion with Normalized Cut, and the Probabilistic Patch-Based Filter with Normalized Cut.
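The three-step pipeline above can be sketched with simple stand-ins: a median filter in place of the Frost/diffusion filters, thresholding in place of Normalized Cut or Quick Shift, and a largest-region heuristic for the candidate-selection step. All of these substitutions, and the synthetic image, are assumptions for illustration only, not the paper's methods.

```python
import numpy as np
from scipy import ndimage

# Toy stand-in for a speckled B-mode ultrasound image: a bright lesion on
# a noisy background. A real pipeline would load the scan instead.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:40, 24:44] += 0.6  # hypothetical lesion region

# Step 1: speckle reduction (median filter as a simple stand-in).
den = ndimage.median_filter(img, size=5)

# Step 2: initial segmentation map (thresholding as a stand-in for
# Normalized Cut / Quick Shift).
mask = den > 0.5
labels, n = ndimage.label(mask)

# Step 3: postprocessing: select one candidate region; here simply the
# largest connected component.
sizes = ndimage.sum(mask, labels, range(1, n + 1))
best = int(np.argmax(sizes)) + 1
lesion = labels == best
print(n, int(lesion.sum()))
```

In practice the candidate-selection step would use richer cues (shape, position, intensity statistics) than region size alone.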
IEEE Conf. on Intelligent Systems (2) | 2015
Mohamed Tahoun; Abd El Rahman Shabayek; Ralf Reulke; Aboul Ella Hassanien
Detection and matching of features from satellite images taken from different sensors, from different viewpoints, or at different times are important tasks when manipulating and processing remote sensing data for many applications. This paper presents a scheme for satellite image co-registration using invariant local features. Different corner- and scale-based feature detectors have been tested during the keypoint extraction, descriptor construction and matching processes. The framework includes a sub-sampling process which controls the number of extracted keypoints for real-time processing and to minimize the hardware requirements. After obtaining the pairwise matches between the input images, a full registration process follows, applying bundle adjustment and image warping and then compositing the registered version. Harris and GFTT recorded good results on ASTER images, and together with SURF they give the most stable performance on optical images in terms of better inlier ratios and running time compared to the other detectors. The SIFT detector recorded the best inlier ratios on TerraSAR-X data, while it still performs weakly on other optical images such as RapidEye and ASTER.
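The response-based keypoint sub-sampling described above can be sketched as follows. The keypoints are synthetic and the `subsample` helper is an illustrative assumption, not the paper's code; a real run would take the (x, y, response) triples from a detector such as Harris or SURF.

```python
import numpy as np

# Hypothetical keypoints as (x, y, response) rows.
rng = np.random.default_rng(1)
kps = np.column_stack([rng.uniform(0, 1000, 500),   # x
                       rng.uniform(0, 1000, 500),   # y
                       rng.random(500)])            # detector response

def subsample(keypoints, max_points):
    """Keep only the strongest keypoints, sorted by detector response."""
    order = np.argsort(keypoints[:, 2])[::-1]  # strongest first
    return keypoints[order[:max_points]]

kept = subsample(kps, 100)
print(kept.shape)
```

Capping the keypoint count this way bounds the cost of descriptor construction and matching, which is what makes real-time processing of high-resolution scenes feasible.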
international conference on neural information processing | 2017
Oyebade K. Oyedotun; Abd El Rahman Shabayek; Djamila Aouada; Björn E. Ottersten
Many works have posited the benefit of depth in deep networks. However, one of the problems encountered in training very deep networks is feature reuse: features are ‘diluted’ as they are forward-propagated through the model, so later network layers receive less informative signals about the input data, making training less effective. In this work, we address the problem of feature reuse by taking inspiration from earlier work that employed residual learning to alleviate it. We propose a modification of residual learning for training very deep networks to achieve improved generalization performance: we allow stochastic shortcut connections of identity mappings from the input to the hidden layers. We perform extensive experiments on the USPS and MNIST datasets. On USPS, we achieve an error rate of 2.69% without employing any form of data augmentation (or manipulation); on MNIST, we reach a comparable state-of-the-art error rate of 0.52%. Notably, these results are achieved without employing any explicit regularization technique.
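A minimal sketch of stochastic identity shortcuts from the input to the hidden layers is given below, assuming a toy fully-connected network whose input and hidden dimensions match so the identity mapping needs no projection. The depth, gating probability, and the always-on shortcuts at evaluation time are assumptions for illustration; the paper's architecture and gating details may differ.

```python
import numpy as np

rng = np.random.default_rng(42)
dim, depth, p = 16, 10, 0.5  # assumed toy sizes and shortcut probability

# Random weights for a deep fully-connected network (He-style init).
Ws = [rng.normal(0.0, np.sqrt(2.0 / dim), (dim, dim)) for _ in range(depth)]

def forward(x, train=True):
    """Forward pass where each hidden layer receives an identity shortcut
    from the *input* x with probability p during training (always at eval,
    a simplification of the usual expectation-matching rule)."""
    h = x
    for W in Ws:
        h = np.maximum(0.0, h @ W)       # ReLU layer
        if (rng.random() < p) if train else True:
            h = h + x                    # stochastic identity shortcut
    return h

x = rng.normal(size=(4, dim))
out = forward(x)
print(out.shape)
```

Because the shortcut source is the input rather than the previous layer, every gated layer sees an undiluted copy of the input signal, which is the intuition behind improved feature reuse.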
3DBODY.TECH 2017 - 8th International Conference and Exhibition on 3D Body Scanning and Processing Technologies, Montreal QC, Canada, 11-12 Oct. 2017 | 2017
Alexandre Saint; Abd El Rahman Shabayek; Djamila Aouada; Björn E. Ottersten; Kseniya Cherenkova; Gleb Gusev
This paper presents a method to automatically recover a realistic and accurate body shape of a person wearing clothing from a 3D scan. Indeed, in many practical situations, people are scanned wearing clothing. The underlying body shape is thus partially or completely occluded. Yet, it is very desirable to recover the shape of a covered body as it provides a non-invasive means of measuring and analysing it. This is particularly convenient for patients in medical applications, for customers in a retail shop, and in security applications where suspicious objects under clothing are to be detected. To recover the body shape from the 3D scan of a person in any pose, a human body model is usually fitted to the scan. Current methods rely on the manual placement of markers on the body to identify anatomical locations and guide the pose fitting. The markers are either physically placed on the body before scanning or placed in software as a postprocessing step. Some other methods detect key points on the scan using 3D feature descriptors to automate the placement of markers; they usually require a large database of 3D scans. We propose to automatically estimate the body pose of a person from a 3D mesh acquired by standard 3D body scanners, with or without texture. To fit a human model to the scan, we use joint locations as anchors. These are detected from multiple 2D views using a conventional body joint detector working on images. In contrast to existing approaches, the proposed method is fully automatic and takes advantage of the robustness of state-of-the-art 2D joint detectors. The proposed approach is validated on scans of people in different poses wearing garments of various thicknesses, and on scans of one person in multiple poses wearing close-fitting clothing, with known ground truth.
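The step of lifting multi-view 2D joint detections to 3D anchor locations can be illustrated with standard linear (DLT) triangulation. The camera matrices and joint position below are synthetic values, not scanner calibration data, and the two views stand in for the multiple rendered views a scanner pipeline would use.

```python
import numpy as np

# Two hypothetical 3x4 camera projection matrices.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])  # reference camera
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])               # 90-degree yaw
t = np.array([[-2.0], [0.0], [2.0]])
P2 = np.hstack([R, t])

X_true = np.array([0.3, -0.2, 2.0, 1.0])       # a body joint (homogeneous)

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one joint from two 2D detections."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]

x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est[:3], 6))
```

With noisy 2D detections, more views and a robust weighting of the DLT rows would be used, but the anchor-recovery principle is the same.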
Archive | 2016
Mohamed Tahoun; Abd El Rahman Shabayek; Hamed Nassar; Marcello Giovenco; Ralf Reulke; Eid Emary; Aboul Ella Hassanien
The rapid increase of remote sensing (RS) data in many applications has ignited interest in satellite image matching and registration. These data are collected through remote sensors and then processed and interpreted by means of image processing algorithms. They are taken from different sensors, viewpoints, or times for many industrial and governmental applications covering agriculture, forestry, urban and regional planning, geology, water resources, and others. In this chapter, a feature-based registration of optical and radar images from the same and different sensors using invariant local features is presented. The registration process starts with the feature extraction and matching stages, which are key issues when processing remote sensing data from single or multiple sensors. The geometric transformation models are then applied, followed by the interpolation method, in order to obtain the final registered version. As a pre-processing step, speckle noise removal is performed on radar images in order to reduce the number of false detections. In a similar fashion, optical images are processed by sharpening and enhancing edges in order to obtain more accurate detections. Different blob-, corner- and scale-based feature detectors are tested on both optical and radar images. The list of tested detectors includes SIFT, SURF, FAST, MSER, Harris, GFTT, ORB, BRISK and Star. In this work, five of these detectors compute their own descriptors (SIFT, SURF, ORB, BRISK, and BRIEF), while the others use the steps involved in the SIFT descriptor to compute the feature vectors describing the detected keypoints. A filtering process is proposed in order to control the number of keypoints extracted from high-resolution satellite images for real-time processing. In this step, the keypoints, or ground control points (GCPs), are sorted according to their response strength, measured based on their cornerness. A threshold value is chosen to control the extracted keypoints and finalize the extraction phase. Then, the pairwise matches between the input images are calculated by matching the corresponding feature vectors. Once the list of tie points is calculated, a full registration process follows, applying different geometric transformations to perform the warping phase. Finally, once the transformation model has been estimated, the registered version is blended and composited. The results included in this chapter show good performance for invariant local feature detectors. For example, SIFT, SURF, Harris, FAST and GFTT achieve better performance on optical images, while SIFT also gives better results on radar images, which suffer from speckle noise. Furthermore, by measuring the inlier ratios, repeatability, and robustness against noise, a variety of comparisons have been made using different local feature detectors and descriptors, in addition to evaluating the whole registration process. The tested optical and radar images are from the RapidEye, Pléiades, TET-1, ASTER, IKONOS-2, and TerraSAR-X satellite sensors at different spatial resolutions, covering areas in Australia, Egypt, and Germany.
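The transformation-model estimation step can be sketched for the affine case: given matched tie points, a least-squares fit recovers the 2x3 affine matrix that maps the reference image into the sensed image. The tie points here are synthetic and noise-free, and `estimate_affine` is an illustrative helper, not the chapter's implementation.

```python
import numpy as np

# Hypothetical tie points: GCP coordinates in the reference image. In a
# real run these would come from the matching stage.
ref = np.array([[10.0, 20.0], [200.0, 40.0], [60.0, 300.0], [250.0, 260.0]])

# Ground-truth affine model used only to synthesise the sensed coordinates.
A_true = np.array([[0.98, -0.05, 12.0],
                   [0.04,  1.02, -7.0]])
sensed = (A_true @ np.hstack([ref, np.ones((4, 1))]).T).T

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src -> dst.

    Needs at least 3 non-collinear tie points."""
    H = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(H, dst, rcond=None)
    return A.T

A_est = estimate_affine(ref, sensed)
print(np.allclose(A_est, A_true))
```

With real, noisy tie points the fit would typically be wrapped in RANSAC to reject outlier matches before the warping and compositing phases.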
international conference on telecommunications | 2015
Mohamed Tahoun; Abd El Rahman Shabayek; Aboul Ella Hassanien; Ralf Reulke
Satellite image matching is an important task when integrating information from different sensors or viewpoints, or from images taken at different times. This paper presents an evaluation of local features on satellite images. Different blob-, edge- and corner-based detectors have been tested against many variables and parameters. During the extraction process, the detected keypoints are sub-sampled in order to reduce the high hardware requirements and processing time associated with high-resolution satellite data. The matching process showed varied performance across the detectors: SURF, FAST, GFTT and Harris recorded the best matching performance on optical images in terms of a higher number of inliers, lower running time, and robustness to noise, while the SIFT detector recorded better performance on radar images. The experiments were carried out on different optical and radar images from RapidEye, Pléiades, and TerraSAR-X satellite data covering the Berlin Brandenburg Airport area.
International Journal of Systems Biology and Biomedical Technologies (IJSBBT) | 2015
Abd El Rahman Shabayek; Olivier Morel; David Fofi
Polarization-based Robot Orientation and Navigation: Progress and Insights
From insects in your garden to creatures in the sea, inspiration can be drawn from nature to design a whole new class of smart robotic devices. These smart machines may move like living creatures and can be launched toward a specific target for a pre-defined task. Bio-inspiration is developing to meet many challenges, particularly in machine vision. Some species in the animal kingdom, such as cephalopods, crustaceans and insects, are distinguished by visual capabilities that are strongly enhanced by means of polarization. This work surveys the most recent research in the area of bio-inspired, polarization-based robot orientation and navigation. First, the authors briefly discuss polarization-based orientation and navigation behaviour in the animal kingdom. Second, a comprehensive overview of its mapping onto robot navigation and orientation estimation is given. Finally, future research directions are discussed.
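The polarization cue underlying such navigation can be sketched numerically: from intensities measured behind linear polarizers at 0°, 45° and 90°, the Stokes parameters yield the angle and degree of linear polarization, the quantities insects use as a sky compass. The intensity values below are synthetic assumptions, standing in for what a polarization camera would supply.

```python
import numpy as np

# Synthetic ground truth for one sky patch.
aop_true = np.deg2rad(30.0)  # assumed angle of polarization
dolp_true = 0.6              # assumed degree of linear polarization
I_tot = 1.0

# Intensities behind ideal linear polarizers at 0, 45 and 90 degrees
# (Malus-type model for partially linearly polarized light).
I0  = 0.5 * I_tot * (1 + dolp_true * np.cos(2 * aop_true))
I45 = 0.5 * I_tot * (1 + dolp_true * np.sin(2 * aop_true))
I90 = 0.5 * I_tot * (1 - dolp_true * np.cos(2 * aop_true))

# Linear Stokes parameters from the three measurements.
S0 = I0 + I90
S1 = I0 - I90
S2 = 2 * I45 - S0

# Recover the compass cues.
aop = 0.5 * np.arctan2(S2, S1)
dolp = np.hypot(S1, S2) / S0

print(round(float(np.rad2deg(aop)), 3), round(float(dolp), 3))
```

Applied per pixel over a sky image, the recovered angle map exhibits the celestial polarization pattern that polarization-based compasses lock onto.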
international conference on computer vision | 2009
Abd El Rahman Shabayek; David Fofi; Olivier Morel
Archive | 2012
Abd El Rahman Shabayek; Olivier Morel; David Fofi