
Publications


Featured research published by Mohamed ElHelw.


The Visual Computer | 2010

Interactive 3D visualization for wireless sensor networks

Reda ElHakim; Mohamed ElHelw

Wireless sensor networks open up a new realm of ubiquitous computing applications based on distributed large-scale data collection by embedded sensor nodes that are wirelessly connected and seamlessly integrated within the environment. 3D visualization of sensory data is challenging, however, due to the large number of sensors used in typical deployments, the continuous data streams they produce, and constantly varying network topology. This paper describes a practical approach for interactive 3D visualization of wireless sensor network data. A regular 3D grid is reconstructed from scattered sensor data points and used to generate view-dependent 2D slices that are subsequently rendered with commodity graphics hardware, leading to smooth visualization over space and time. Furthermore, the use of efficient space-partitioning data structures and the independent processing of sensor data points facilitate interactive rendering for large numbers of sensors while accommodating constantly changing network topology. The practical value of the proposed method is demonstrated, and results obtained for visualizing time-varying temperature distributions in an urban area are presented.
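The core resampling step, scattered sensor readings to a regular 3D grid from which 2D slices are cut, can be illustrated with a minimal sketch. This is not the authors' GPU slice-rendering pipeline; the sensor positions, temperature values, and grid resolution below are synthetic assumptions.

```python
# Minimal sketch: resample scattered sensor readings onto a regular 3D grid
# and extract one axis-aligned 2D slice for rendering. Hypothetical data;
# not the paper's view-dependent GPU slicing implementation.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 100.0, size=(500, 3))      # sensor positions (x, y, z)
temps = 20.0 + 10.0 * np.sin(points[:, 0] / 20.0)    # synthetic temperature readings

# Regular grid covering the deployment volume.
xs, ys, zs = (np.linspace(0, 100, 64),) * 3
gx, gy, gz = np.meshgrid(xs, ys, zs, indexing="ij")

# Scattered-data interpolation; 'linear' leaves NaNs outside the convex hull.
grid = griddata(points, temps, (gx, gy, gz), method="linear")

# A single 2D slice at a fixed height, ready to be texture-mapped by a renderer.
slice_z = grid[:, :, 32]
print(slice_z.shape, np.nanmin(slice_z), np.nanmax(slice_z))
```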


Wearable and Implantable Body Sensor Networks | 2010

Body and Visual Sensor Fusion for Motion Analysis in Ubiquitous Healthcare Systems

Mohamed ElSayed; Abubakr Alsebai; Ahmed Salaheldin; Neamat El Gayar; Mohamed ElHelw

Human motion analysis provides a valuable solution for monitoring the wellbeing of the elderly, quantifying post-operative patient recovery, and monitoring the progression of neurodegenerative diseases such as Parkinson's. The development of accurate motion analysis models, however, requires the integration of multiple sensing modalities and the use of appropriate data analysis techniques. This paper describes a robust framework for improved patient motion analysis that integrates information captured by body and visual sensor networks. Real-time target extraction is applied, and a skeletonization procedure is subsequently carried out to quantify the internal motion of the moving target and compute two metrics, the spatiotemporal cyclic motion between leg segments and the head trajectory, for each vision node. The motion metrics extracted from multiple vision nodes and accelerometer information from a wearable body sensor are then fused at the feature level using the K-Nearest Neighbor algorithm and used to classify the target's walking gait as normal or abnormal. The potential value of the proposed framework for patient monitoring is demonstrated, and the results obtained from practical experiments are described.
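The feature-level fusion step can be sketched as follows: per-sequence vision metrics and accelerometer features are concatenated and passed to a K-Nearest Neighbor classifier. The feature dimensions and data are invented placeholders, not the paper's dataset or exact feature set.

```python
# Minimal sketch of feature-level fusion with a K-Nearest Neighbor classifier:
# vision-derived gait metrics and accelerometer features are concatenated per
# walking sequence and classified as normal/abnormal. Synthetic data only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n = 200
vision_metrics = rng.normal(size=(n, 2))   # e.g. cyclic leg-segment motion, head trajectory
accel_features = rng.normal(size=(n, 3))   # e.g. per-axis acceleration statistics
labels = rng.integers(0, 2, size=n)        # 0 = normal gait, 1 = abnormal gait

X = np.hstack([vision_metrics, accel_features])   # feature-level fusion
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))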


International Conference on e-Health Networking, Applications and Services | 2010

Ambient and wearable sensing for gait classification in pervasive healthcare environments

M. ElSayed; A. Alsebai; Ahmed Salaheldin; N. El Gayar; Mohamed ElHelw

Pervasive healthcare environments provide an effective solution for monitoring the wellbeing of the elderly, where the general trend of an increasingly ageing population has placed significant burdens on current healthcare systems. An important pervasive healthcare system functionality is patient motion analysis, where gait information can be used to detect walking abnormalities that may indicate the onset of adverse health problems, to quantify post-operative recovery, and to observe the progression of neurodegenerative diseases. The development of accurate motion analysis models, however, requires the integration of multiple sensing modalities and the use of appropriate data analysis techniques. This paper describes a simple and robust framework for improved patient motion analysis based on an ambient sensor and a wearable sensor. Using visual information from a single vision sensor, target segmentation is first carried out and a skeleton extraction procedure is subsequently applied to quantify the target's internal motion by computing two metrics, the spatiotemporal cyclic motion between leg segments and the head trajectory. Accelerometer information from a wearable body sensor is fused with these metrics at the feature level using the K-Nearest Neighbor algorithm to classify the target's walking gait as normal or abnormal. The potential value of the proposed framework for patient monitoring is demonstrated, and the results obtained from practical experiments are described.
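The vision side of this pipeline, segment the moving target, skeletonize the silhouette, and track a head point over time, can be sketched as below. The frames are synthetic and the head point is simply the topmost skeleton pixel; this is an assumption for illustration, not the paper's skeleton-extraction procedure.

```python
# Minimal sketch: foreground segmentation, silhouette skeletonization, and a
# crude head-trajectory estimate (topmost skeleton pixel) across frames.
import numpy as np
import cv2
from skimage.morphology import skeletonize

subtractor = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=16)
head_trajectory = []

for t in range(60):
    # Synthetic frame: a bright rectangle (the "person") moving left to right.
    frame = np.zeros((240, 320), dtype=np.uint8)
    x = 40 + 3 * t
    cv2.rectangle(frame, (x, 60), (x + 30, 200), 255, -1)

    mask = subtractor.apply(frame) > 0              # foreground segmentation
    skeleton = skeletonize(mask)                    # 1-pixel-wide silhouette skeleton
    ys, xs = np.nonzero(skeleton)
    if len(ys):
        head_trajectory.append((int(xs[ys.argmin()]), int(ys.min())))

print("head trajectory samples:", head_trajectory[:5])
```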


International Conference on Ultra Modern Telecommunications | 2012

Remote prognosis, diagnosis and maintenance for automotive architecture based on least squares support vector machine and multiple classifiers

Mostafa Anwar Taie; Mohammed Diab; Mohamed ElHelw

Software issues related to automotive controls account for an increasingly large percentage of overall vehicle recalls. To alleviate this problem, vehicle diagnosis and maintenance are increasingly performed remotely, that is, while the vehicle is being driven and without the need for a factory recall, and there is strong consumer interest in Remote Diagnosis and Maintenance (RD&M) systems. Such systems are built from different elements and offer various capabilities. This paper presents a novel automotive RD&M and prognosis system architecture. The elements of the proposed system include vehicles, smart phones, maintenance service centers, the vehicle manufacturer, RD&M experts, RD&M service centers, logistics carry centers, and emergency centers. The system promotes the role of smart phones, which are used to run prognosis and diagnosis tools based on Least Squares Support Vector Machine (LS-SVM) multiple classifiers. During the prognosis phase, the smart phone stores the history of any forecast failures and sends it to the RD&M service center, but only if a failure has actually occurred during diagnosis. The center then forwards it to RD&M experts as real failure data to improve the training data used for prognosis classification and prediction of the remaining useful life (RUL). LS-SVM is widely used in prognostics and system health management of spacecraft in orbit, where it is applied to monitor spacecraft performance, detect faults, identify the root cause of a fault, and predict RUL; the same approach is applied in this paper. Finally, the RD&M software architectures for the vehicle and the smart phone are presented.
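For reference, a least-squares SVM binary classifier reduces to solving one linear system in its regression-style dual form. The sketch below shows that idea on toy "healthy vs. failing component" features; it is a single RBF-kernel classifier with invented data, not the paper's multi-classifier RD&M prognosis setup.

```python
# Minimal LS-SVM binary classifier sketch (RBF kernel): solve the dual system
# [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y], then predict with sign().
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=0.5):
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate([[0.0], y.astype(float)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                 # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=0.5):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# Toy features standing in for vehicle health indicators.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([-1] * 50 + [1] * 50)
b, alpha = lssvm_fit(X, y)
print("training accuracy:", (lssvm_predict(X, b, alpha, X) == y).mean())
```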


Robot | 2014

Real-Time Vehicle Detection and Tracking Using Haar-Like Features and Compressive Tracking

Sara Maher Elkerdawi; Ramy Sayed; Mohamed ElHelw

This paper presents a real-time vision framework that detects and tracks vehicles from a stationary camera. It can be used to compute statistical information such as average traffic speed and flow, as well as in surveillance tasks. The framework consists of three main stages. Vehicles are first detected using Haar-like features. In the second stage, an adaptive appearance-based model is built to dynamically keep track of the detected vehicles. This model is also used in the third stage, data association, to fuse the detection and tracking results. Using the detection results to update the tracker enhances the overall framework accuracy. The practical value of the proposed framework is demonstrated in real-life experiments where it is used to robustly compute vehicle counts within a given region of interest under a variety of challenging conditions.
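A minimal detect-then-associate loop in this spirit is sketched below: a Haar cascade proposes vehicle boxes and detections are matched to existing tracks by bounding-box overlap. The cascade file and video path are placeholders, and the adaptive appearance (compressive tracking) update is omitted; this is not the paper's implementation.

```python
# Minimal sketch: Haar-cascade vehicle detection plus IoU-based association
# of detections with existing tracks. Model and video paths are placeholders.
import cv2

def iou(a, b):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / float(aw * ah + bw * bh - inter + 1e-9)

cascade = cv2.CascadeClassifier("cars_haar_cascade.xml")   # hypothetical model file
cap = cv2.VideoCapture("traffic.mp4")                      # hypothetical video
tracks = []                                                # list of (track_id, box)
next_id = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

    updated = []
    for det in detections:
        match = max(tracks, key=lambda t: iou(t[1], det), default=None)
        if match and iou(match[1], det) > 0.3:
            updated.append((match[0], tuple(det)))          # refresh existing track
        else:
            updated.append((next_id, tuple(det)))           # start a new track
            next_id += 1
    tracks = updated

print("vehicles counted:", next_id)
```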


IEEE-EMBS Conference on Biomedical Engineering and Sciences | 2012

EEG spectral analysis for attention state assessment: Graphical versus classical classification techniques

Ahmed Fathy; Ahmed S. Fahmy; Mohamed ElHelw; Seif Eldawlatly

Advances in Brain-Computer Interface (BCI) technology have opened the door to assisting millions of people worldwide with disabilities. In this work, we focus on assessing the brain's attention state, which could be used to selectively run an application on a hand-held device. We examine different classification techniques for assessing the attention state. Spectral analysis of the recorded EEG activity was performed to compute the Alpha band power for different subjects during attentive and non-attentive tasks. The estimated power values were used to train a number of classical classifiers to discriminate between the two attention states. Results demonstrate a classification accuracy of 70% using both individual- and multi-channel data. We then utilize a graphical approach to assess the causal influence among EEG electrodes for each of the two attention states. The inferred graphical representations for each state were used as signatures for state classification. A classification accuracy of 83% was obtained using the graphical approach, outperforming the examined classical classifiers.
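The classical-classifier half of the pipeline (Alpha band power plus a simple classifier) can be sketched as below. The sampling rate, signal model, and classifier choice are assumptions for illustration; the paper's graphical (causal influence) approach is not shown.

```python
# Minimal sketch: per-trial Alpha-band (8-13 Hz) power via Welch's method,
# fed to a simple classifier. Synthetic EEG-like signals only.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

fs = 256                      # assumed sampling rate (Hz)
rng = np.random.default_rng(3)

def alpha_power(trial):
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    band = (freqs >= 8) & (freqs <= 13)
    return psd[band].mean()

# Synthetic trials: one class gets an extra 10 Hz oscillation.
trials, labels = [], []
for label in (0, 1):
    for _ in range(50):
        t = np.arange(fs * 2) / fs
        sig = rng.normal(size=t.size) + label * 2.0 * np.sin(2 * np.pi * 10 * t)
        trials.append([alpha_power(sig)])
        labels.append(label)

clf = LogisticRegression().fit(trials, labels)
print("training accuracy:", clf.score(trials, labels))
```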


IEEE International Electric Vehicle Conference | 2012

AiroDiag: A sophisticated tool that diagnoses and updates vehicles software over air

Karim Mansour; Wael Farag; Mohamed ElHelw

This paper introduces a novel method for diagnosing embedded systems and updating the embedded software installed on the electronic control units of vehicles over the Internet using client and server units. It also presents the communication protocols between the vehicle and the manufacturer for instant fault diagnosis and software updates while ensuring security for both parties. AiroDiag ensures maximum vehicle efficiency for the driver and provides the manufacturer with up-to-date vehicle performance data, allowing enhanced future software deployment and minimal losses in case of vehicle recalls.
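To make the client/server exchange concrete, here is a small sketch of a vehicle-to-manufacturer diagnostic report with message authentication. The message fields, key, and HMAC scheme are invented stand-ins, not the AiroDiag protocol.

```python
# Minimal sketch: an authenticated diagnostic report sent from a vehicle client
# to a manufacturer server. All fields and the shared key are hypothetical.
import hashlib, hmac, json, time

SHARED_KEY = b"demo-key-not-for-production"

def build_report(vin, fault_codes, sw_version):
    body = json.dumps({
        "vin": vin,
        "faults": fault_codes,
        "sw_version": sw_version,
        "timestamp": int(time.time()),
    }, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "hmac": tag}

def verify_report(report):
    expected = hmac.new(SHARED_KEY, report["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["hmac"])

msg = build_report("WVWZZZ1JZXW000001", ["P0301", "U0100"], "2.4.1")
print("server accepts report:", verify_report(msg))
```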


International Conference on Signal and Image Processing Applications | 2011

A semi supervised learning-based method for adaptive shadow detection

Mohamed M. El-Zahhar; Abubakrelsedik Karali; Mohamed ElHelw

In vision-based systems, cast shadow detection is one of the key problems that must be addressed in order to achieve robust segmentation of moving objects. Most shadow detection methods require significant human input and work only in static settings. This paper proposes a novel approach to adaptive shadow detection using semi-supervised learning, a technique that has been widely utilized in various pattern recognition applications and exploits both labeled and unlabeled data to improve classification. The approach can be summarized as follows: first, we extract color, texture, and gradient features that are useful for differentiating between moving objects and their shadows; second, we apply a semi-supervised learning approach for adaptive shadow detection. Experimental results obtained on benchmark video sequences demonstrate that the proposed technique improves both the shadow detection rate (correctly classifying shadow points as shadows) and the shadow discrimination rate (not misclassifying object points as shadows) under different scene conditions.
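The semi-supervised idea, a few labeled shadow/object samples plus many unlabeled ones, can be sketched with label spreading over a feature graph. The features and data below are synthetic placeholders, not the paper's color/texture/gradient descriptors or learning algorithm.

```python
# Minimal sketch of semi-supervised shadow/object classification: labels from
# a small seed set are propagated to unlabeled samples. Synthetic features.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(4)
n = 300
features = np.vstack([                     # stand-ins for per-pixel feature vectors
    rng.normal(0.0, 1.0, (n // 2, 3)),     # shadow-like samples
    rng.normal(2.0, 1.0, (n // 2, 3)),     # object-like samples
])
true_labels = np.array([0] * (n // 2) + [1] * (n // 2))

labels = np.full(n, -1)                    # -1 marks unlabeled samples
labeled_idx = rng.choice(n, size=20, replace=False)
labels[labeled_idx] = true_labels[labeled_idx]

model = LabelSpreading(kernel="rbf", gamma=0.5).fit(features, labels)
print("accuracy on all samples:", (model.transduction_ == true_labels).mean())
```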


IFIP Wireless Days | 2010

WASP: Wireless autonomous sensor prototype for Visual Sensor Networks

Adel Orfy; Agwad El-Sayed; Mohamed ElHelw

Visual Sensor Networks (VSNs) enable enhanced three-dimensional sensing of spaces and objects and facilitate collaborative reasoning, opening up a new realm of vision-based distributed smart applications including security and surveillance, healthcare delivery, and traffic monitoring, to name a few. However, such applications require sensor nodes that can efficiently process large volumes of visual information in situ and exhibit intelligent behavior to support autonomous operation, scalability, and energy efficiency. This paper presents WASP, a vision sensor node prototype with high computational capabilities that satisfies key VSN application requirements. The node integrates hardware components, comprising a generic processor, a wireless communication module, and a camera, with a software framework designed to carry out key visual information processing tasks including object detection, segmentation, and morphological operators for image enhancement. In addition, the framework implements classifiers for target recognition and scenario analysis.
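A minimal sketch of the in-node processing chain described here, foreground segmentation, morphological clean-up, and blob extraction, is shown below. The frames are synthetic and the chain is illustrative; it is not the WASP firmware.

```python
# Minimal sketch: background subtraction, morphological opening/closing, and
# contour-based object extraction on synthetic frames.
import numpy as np
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

for t in range(30):
    frame = np.full((240, 320), 30, dtype=np.uint8)        # static background
    cv2.circle(frame, (50 + 5 * t, 120), 20, 200, -1)      # moving object

    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]

print("objects detected in last frame:", len(boxes))
```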


Robotics and Biomimetics | 2014

Vision-based scale-adaptive vehicle detection and tracking for intelligent traffic monitoring

Sara Elkerdawy; Ahmed Salaheldin; Mohamed ElHelw

This paper presents a novel real-time scale-adaptive visual tracking framework and its use in smart traffic monitoring, where the framework robustly detects and tracks vehicles from a stationary camera. Existing visual tracking methods often employ semi-supervised appearance models in which a set of samples is continuously extracted around the object to train a discriminative classifier between the vehicle and the background. While these methods have proven advantageous, several issues remain to be addressed. One is the tradeoff between high adaptability (prone to drift) and preserving the original vehicle appearance (susceptible to tracking loss under appearance changes). Another is vehicle scale variation due to perspective camera effects, which increases the potential for inaccurate classifier updates and, subsequently, tracking drift. Yet scale adaptability has received little attention in discriminative trackers. In this paper we propose a Scale Adaptive Object Tracking (SAOT) algorithm that adapts to scale and appearance changes. The algorithm is divided into three phases: (1) vehicle localization using a diverse ensemble of multiple random projections, (2) scale estimation, and (3) data association, where detected and tracked vehicles are correlated. Experimental results demonstrate that our method provides robust tracking under significant scale variations and helps alleviate the tracker drift problem.
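The random-projection ingredient of such trackers can be sketched as follows: candidate patches around the previous location are compressed with a sparse random projection and compared against the compressed template. This toy single-projection example, with synthetic image data, is only a stand-in for the paper's ensemble of projections, scale estimation, and data association.

```python
# Minimal sketch: localize a tracked patch by comparing sparse random
# projections of candidate patches with the template's projection.
import numpy as np
from sklearn.random_projection import SparseRandomProjection

rng = np.random.default_rng(5)
image = rng.integers(0, 255, size=(240, 320)).astype(np.float32)
patch = (32, 32)                               # tracked patch size (h, w)
prev = (100, 150)                              # previous top-left (y, x)

def crop(y, x):
    return image[y:y + patch[0], x:x + patch[1]].ravel()

proj = SparseRandomProjection(n_components=50, random_state=0)
proj.fit(crop(*prev)[None, :])                 # fix the random projection matrix
template = proj.transform(crop(*prev)[None, :])

# Candidate search window around the previous position.
candidates = [(prev[0] + dy, prev[1] + dx) for dy in range(-8, 9, 4) for dx in range(-8, 9, 4)]
feats = proj.transform(np.stack([crop(y, x) for y, x in candidates]))
best = candidates[int(np.linalg.norm(feats - template, axis=1).argmin())]
print("new location:", best)
```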
