Mahmoud Kamel
King Abdulaziz University
Publication
Featured research published by Mahmoud Kamel.
International Journal of Rough Sets and Data Analysis archive | 2015
Ahmed Radhwan; Mahmoud Kamel; Mohamed Yehia Dahab; Aboul Ella Hassanien
Accurate forecasting of future events constitutes a fascinating challenge for both theoretical and applied research. The Foreign Exchange (FOREX) market is selected in this research as an example of a financial system with complex behavior. Forecasting a financial time series can be a very hard task due to the inherently uncertain nature of these systems: it seems very difficult to tell whether a series is stochastic, deterministically chaotic, or some combination of these states. More generally, the extent to which a nonlinear deterministic process retains its properties when corrupted by noise is also unclear; noise can affect a system in different ways even though the equations of the system remain deterministic. Since no single reliable statistical test for chaoticity is available, combining multiple tests is crucial, especially when dealing with limited and noisy data sets such as economic and financial time series. In this research, the authors propose an improved model for forecasting exchange rates based on chaos theory, involving phase space reconstruction from the observed time series and the use of support vector regression (SVR) for forecasting. Given the exchange rates of a currency pair as scalar observations, the observed time series is first analyzed to verify the existence of underlying nonlinear dynamics governing its evolution over time. Then, the time series is embedded into a higher-dimensional phase space using embedding parameters. To find the optimal embedding parameters, a novel method based on the Differential Evolution (DE) genetic algorithm as a global optimization technique was applied. The authors compared the forecasting accuracy of the proposed model against the ordinary use of support vector regression. The experimental results demonstrate that the proposed method, which is based on chaos theory and a genetic algorithm, is comparable with existing approaches.
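The phase space reconstruction step described above can be sketched as a Takens-style time-delay embedding. This is an illustrative sketch, not the paper's implementation: the function name `embed`, the toy exchange-rate values, and the parameter choices are assumptions, and the SVR and DE stages that follow in the paper are omitted.

```python
def embed(series, dim, tau):
    """Takens-style time-delay embedding: map a scalar series to
    dim-dimensional phase-space points with delay tau."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

# Toy currency-pair closing rates (illustrative values only).
rates = [1.10, 1.12, 1.11, 1.13, 1.15, 1.14, 1.16, 1.18]
points = embed(rates, dim=3, tau=1)  # each point would feed the regressor
```

In the paper, a regressor such as SVR would then be trained to map each embedded point to the next observation, with `dim` and `tau` tuned by the DE optimizer rather than fixed by hand.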
Biomedical Engineering Online | 2014
Mohammed J. Alhaddad; Mahmoud Kamel; Meena M. Makary; Hani Hargas; Yasser M. Kadah
Background: The signals acquired in brain-computer interface (BCI) experiments usually involve several complicated sampling, artifact, and noise conditions. This mandates the use of several preprocessing strategies to allow the extraction of meaningful components of the measured signals for further processing steps. In spite of the success of present preprocessing methods in improving the reliability of BCI, there is still room to boost performance even further. Methods: A new preprocessing method for denoising P300-based brain-computer interface data that allows better performance with a lower number of channels and blocks is presented. The new denoising technique is based on a modified version of spectral subtraction denoising and works on each temporal signal channel independently, thus offering seamless integration with existing preprocessing and allowing low channel counts to be used. Results: The new method is verified using experimental data and compared to the classification results of the same data without denoising and with denoising using a present wavelet-shrinkage-based technique. Enhanced performance in different experiments, as quantitatively assessed using classification block accuracy as well as bit rate estimates, was confirmed. Conclusion: The new preprocessing method based on spectral subtraction denoising offers superior performance to existing methods and has potential for practical utility as a new standard preprocessing block in BCI signal processing.
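The core of spectral subtraction can be sketched on a single channel: transform to the frequency domain, subtract an estimated noise magnitude spectrum, and transform back with the original phase. This is a generic sketch of the technique, not the authors' modified version; the naive DFT, the `alpha` over-subtraction factor, and the clipping rule are illustrative assumptions.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a sketch)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def spectral_subtract(channel, noise_mag, alpha=1.0):
    """Subtract an estimated noise magnitude spectrum from one channel,
    keeping the original phase; negative magnitudes are clipped to zero."""
    X = dft(channel)
    cleaned = [cmath.rect(max(abs(Xk) - alpha * noise_mag[k], 0.0), cmath.phase(Xk))
               for k, Xk in enumerate(X)]
    return idft(cleaned)
```

In practice `noise_mag` would be estimated from a noise-only segment, and the subtraction applied to each channel independently before feature extraction, as the abstract describes.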
Journal of Information & Knowledge Management | 2017
Hanan Al-Mofareji; Mahmoud Kamel; Mohamed Yehia Dahab
Organizing web information is an important aspect of finding information in the easiest and most efficient way. We present a new method for web document clustering called WeDoCWT, which exploits the discrete wavelet transform and term signals to improve document representation. We studied different methods of document segmentation for constructing the term signals. We used two datasets, UW-CAN and WebKB, to evaluate the proposed method. The experimental results indicated that dividing the documents into fixed segments is preferable to dividing them into logical segments based on HTML features, because web pages do not share the same structure. The mean TF–IDF reduction technique gives the best results in most cases. WeDoCWT gives a better F-measure than most of the previous approaches described in the literature. We used the Munkres assignment algorithm to assign each produced cluster to its original class in order to evaluate the clustering results.
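A term signal, as used here, is simply the count of a term in each document segment, so a term's positions become a short sequence the wavelet transform can act on. A minimal sketch of the fixed-segment variant (the function name, segment count, and tokenization are illustrative assumptions, not WeDoCWT's exact construction):

```python
def term_signal(doc_tokens, term, n_segments=8):
    """Count occurrences of `term` in each of n fixed-length segments,
    yielding the term's 'signal' across the document."""
    seg_len = max(1, -(-len(doc_tokens) // n_segments))  # ceiling division
    signal = [0] * n_segments
    for i, tok in enumerate(doc_tokens):
        if tok == term:
            signal[min(i // seg_len, n_segments - 1)] += 1
    return signal
```

The abstract's finding is that such fixed segments beat HTML-based logical segments because page structure varies too much across sites.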
International Conference on Advanced Intelligent Systems and Informatics | 2016
Mohamed Yehia Dahab; Mahmoud Kamel; Sara Alnofaie
In most classical information retrieval models, documents are represented as bags of words, which takes into account term frequencies (tf) and inverse document frequencies (idf) while ignoring term proximity. Recently, proximity among query terms has been observed to be beneficial for improving document retrieval performance. Several retrieval applications have implemented tools to determine term proximity at the query formulation level. They rank documents based on the relative positions of the query terms within the documents, but they must store all proximity data in the index, leading to a large index that slows the search. Recently, many models use a term signal representation for each query term; the query is transformed from the time domain to the frequency domain using transformation techniques such as the wavelet transform. The Discrete Wavelet Transform (DWT) uses a multi-resolution technique by which different frequencies are analyzed with different resolutions. The advantage of the DWT is that it considers the spatial information of the query terms within the document rather than using only term counts. In this paper, in order to improve the ranking score, improve the run-time efficiency of resolving the query, and maintain a reasonable index size, three different types of spectral analysis based on semantic segmentation are carried out, namely sentence-based segmentation, paragraph-based segmentation, and fixed-length segmentation; different term weightings are also applied according to term position.
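The wavelet step can be illustrated with the Haar transform, the simplest DWT: pairs of values are repeatedly averaged and differenced, so the coarse coefficients capture where in the document a term concentrates. A minimal sketch, assuming a power-of-two signal length; the function name and the divide-by-two normalization are illustrative, not necessarily the paper's convention:

```python
def haar_dwt(signal):
    """Full Haar decomposition of a length-2^k signal: repeatedly
    average and difference adjacent pairs. Returns the overall average
    followed by detail coefficients from coarsest to finest."""
    coeffs = []
    s = list(signal)
    while len(s) > 1:
        avgs = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
        diffs = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
        coeffs = diffs + coeffs  # finer details go to the right
        s = avgs
    return s + coeffs
```

Applied to a term signal, a large low-frequency coefficient indicates the term clusters in one region of the document, which is the proximity evidence the ranking score exploits.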
Archive | 2018
Mohamed Yehia Dahab; Sara Alnofaie; Mahmoud Kamel
The most successful information retrieval techniques have the ability to expand the original query with additional terms that best represent the actual user need. This tutorial gives an overview of information retrieval models based on query expansion, along with practical details and descriptions of implementation methods. Toy examples with data are provided to help the reader grasp the main idea behind query expansion (QE) techniques such as Kullback-Leibler Divergence (KLD) and candidate expansion terms based on WordNet. The tutorial uses spectral analysis, one of the recent information retrieval techniques that considers term proximity.
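The KLD technique mentioned above can be sketched in a few lines: a candidate term is scored by how much more probable it is in the pseudo-relevant documents than in the collection as a whole, and the top-scoring terms are added to the query. A hedged sketch; the toy dictionaries are invented, and real systems would smooth zero counts, which this sketch omits:

```python
import math

def kld_score(term, relevant_tf, collection_tf):
    """KLD contribution of a candidate expansion term:
    p_R(t) * log(p_R(t) / p_C(t)), where p_R is the term's probability
    in the pseudo-relevant set and p_C in the whole collection."""
    p_r = relevant_tf[term] / sum(relevant_tf.values())
    p_c = collection_tf[term] / sum(collection_tf.values())
    return p_r * math.log(p_r / p_c)

# Toy counts: 'bank' is over-represented in the relevant set,
# 'the' is not, so only 'bank' is a good expansion candidate.
relevant = {"bank": 5, "loan": 3, "the": 2}
collection = {"bank": 10, "loan": 5, "the": 85}
```

Ranking all candidates by this score and keeping the top few is the usual QE loop; the WordNet variant instead draws candidates from the query terms' synsets.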
Archive | 2018
Mohamed Yehia Dahab; Mahmoud Kamel; Sara Alnofaie
Most information retrieval models represent documents as bags of words, taking into account term frequencies (tf) and inverse document frequencies (idf). However, most of these models ignore the distance among query terms in the documents (i.e., term proximity). Several studies have appeared in recent years that use the proximity among query terms to increase the efficiency of document retrieval. To address the proximity problem, several studies have implemented tools to specify term proximity at the query formulation level. They rank documents based on the relative positions of the query terms within the documents, but they must store all proximity data in the index, increasing its size and slowing down the search process. In the last decade, many studies have provided models that use a term signal representation for each query term; the query is transformed from the time domain into the frequency domain using transformation techniques such as the wavelet transform. Discrete Wavelet Transforms (DWT), such as Haar and Daubechies, use a multi-resolution technique by which different frequencies are analyzed with different resolutions. The advantage of the DWT is that it considers the spatial information of the query terms within the document rather than using only term counts. In this chapter, in order to improve the ranking score, improve the run-time efficiency of resolving the query, and maintain a reasonable index size, two different discrete wavelet transform algorithms are applied, namely Haar and Daubechies; three different types of spectral analysis based on semantic segmentation are carried out, namely sentence-based segmentation, paragraph-based segmentation, and fixed-length segmentation; and different term weightings are applied according to term position. The experiments were constructed using the Text REtrieval Conference (TREC) collection.
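Of the two wavelet families named here, Daubechies D4 is the less obvious one to write down: each output coefficient mixes four neighboring samples through fixed filter taps. A minimal one-level sketch under the standard D4 coefficients with periodic boundary handling; the function name and the exact boundary convention are assumptions of this sketch, not necessarily the chapter's implementation:

```python
import math

def daubechies4_step(s):
    """One level of the Daubechies D4 transform with periodic boundaries:
    returns (approximation, detail) coefficient lists of half length."""
    r3 = math.sqrt(3.0)
    norm = 4 * math.sqrt(2.0)
    h = [(1 + r3) / norm, (3 + r3) / norm, (3 - r3) / norm, (1 - r3) / norm]
    g = [h[3], -h[2], h[1], -h[0]]  # wavelet (high-pass) filter
    n = len(s)
    approx, detail = [], []
    for i in range(0, n, 2):
        approx.append(sum(h[k] * s[(i + k) % n] for k in range(4)))
        detail.append(sum(g[k] * s[(i + k) % n] for k in range(4)))
    return approx, detail
```

Because the high-pass taps sum to zero, a constant term signal produces zero detail coefficients, so only genuine variation in term placement contributes to the proximity score.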
Biomedical Signal Processing and Control | 2017
Khaled Sayed; Mahmoud Kamel; Mohammed J. Alhaddad; Hussein Malibary; Yasser M. Kadah
A new processing framework that allows detailed characterization of the nonlinear dynamics of EEG signals at real-time rates is proposed. In this framework, the phase space trajectory is reconstructed and the underlying dynamics of the brain at different mental states are identified by analyzing the shape of this trajectory. Two sets of features, based on affine-invariant moments and the distance series transform, allow robust estimation of the properties of the phase space trajectory while maintaining real-time performance. We describe the methodological details and practical implementation of the new framework and perform experimental verification using datasets from BCI competitions II and IV. The results showed excellent performance of the new features compared to competition winners and recent research on the same datasets, providing the best results on the Graz2003 dataset and outperforming the competition winner in 6 out of 9 subjects on the Graz2008 dataset. Furthermore, the computation times needed with the new methods were confirmed to permit real-time processing. The combination of a more detailed description of the nonlinear dynamics of EEG and meeting online processing goals offers great potential for several time-critical BCI applications such as prosthetic arm control or mental state monitoring for safety.
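Of the two feature sets, the distance series transform is easy to sketch: the reconstructed trajectory is collapsed into a one-dimensional series of distances from a reference point, whose statistics then describe the attractor's shape. In this sketch the reference point is the trajectory centroid; that choice, like the function name, is an assumption for illustration rather than the paper's exact definition:

```python
def distance_series(trajectory):
    """Map a list of phase-space points (tuples) to the series of their
    Euclidean distances from the trajectory centroid, a cheap 1-D
    descriptor of the attractor's shape."""
    dim = len(trajectory[0])
    n = len(trajectory)
    centroid = [sum(p[j] for p in trajectory) / n for j in range(dim)]
    return [sum((p[j] - centroid[j]) ** 2 for j in range(dim)) ** 0.5
            for p in trajectory]
```

Summary statistics of this series (mean, variance, higher moments) cost little to compute per window, which is why such features are compatible with the real-time constraint the abstract emphasizes.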
international conference on computational science | 2014
Mohammed J. Alhaddad; Mahmoud Kamel; Dalal M. Bakheet
A brain-computer interface (BCI) is a novel communication system that translates brain signals into control commands. In this paper, we present a P300 BCI system based on ordinal pattern features. Compared to a BCI system based on linear time domain features, we have shown that slightly better classification accuracies and bit rates can be achieved for healthy and disabled subjects.
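Ordinal pattern features can be sketched directly: each short window of the signal is replaced by the permutation that sorts it, and the histogram of permutations over the recording is the feature vector. A minimal sketch; the window length and the stable-sort tie handling are illustrative choices, not the paper's stated parameters:

```python
from collections import Counter

def ordinal_patterns(series, order=3):
    """Count, over all sliding windows of length `order`, the
    permutation of indices that sorts the window (ties broken by
    position via Python's stable sort)."""
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        patterns[pattern] += 1
    return patterns
```

Because only the ordering of samples matters, such features are robust to the amplitude drifts and offsets that plague raw EEG, which is one motivation for using them in P300 detection.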
advances in information technology | 2013
Mohammed J. Alhaddad; Mahmoud Kamel; Dalal M. Bakheet
Various disabilities prevent people from reciting the Quran, which was revealed to the Prophet Muhammad, peace be upon him. Hence, new interaction options for those people are required. Much research has scrutinized various techniques; in this research, a Brain Computer Interface (BCI) system based on the P300 evoked potential for controlling a simplified Quran player using brain activity alone is designed and implemented. The system was examined at the BCI laboratory in King Abdulaziz University Hospital, and successful results were achieved.
autonomous and intelligent systems | 2012
Mohammed J. Alhaddad; Mahmoud Kamel; Hussein Malibary; Khalid Thabit; Foud Dahlwi; Anas A. Hadi
P300 detection is known to be a challenging task, as P300 potentials are buried in a large amount of noise. In standard recording of P300 signals, activity at the reference site affects measurements at all active electrode sites. Analyses of P300 data would be improved if reference site activity could be separated out. This step is an important one before the extraction of P300 features. The essential goal is to improve the signal-to-noise ratio (SNR) significantly, i.e., to separate the task-related signal from the noise content, and thereby support the most accurate and rapid P300 speller. Different techniques have been proposed to remove common sources of artifacts in raw EEG signals. In this research, twelve different techniques have been investigated, along with their application to the P300 speller on three different datasets. The results as a whole demonstrate that the common average reference (CAR) technique proved best able to distinguish between targets and non-targets. It was significantly superior to the other techniques.
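The winning CAR technique is simple to state: at every time sample, the average over all electrodes is subtracted from each channel, removing activity common to the whole montage (including the reference site). A minimal per-sample sketch, with an illustrative function name:

```python
def common_average_reference(samples):
    """Re-reference one time sample: subtract the mean across all
    electrodes from every channel value."""
    mean = sum(samples) / len(samples)
    return [v - mean for v in samples]
```

Applying this to each time sample of the raw recording yields channels that sum to zero, so any signal shared by all electrodes, such as reference-site drift, is cancelled before P300 feature extraction.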