Publication


Featured research published by Sherif Hashem.


Pattern Recognition | 2007

Self-generating prototypes for pattern classification

Hatem A. Fayed; Sherif Hashem; Amir F. Atiya

Prototype classifiers are a type of pattern classifier in which a number of prototypes are designed for each class so that they act as representatives of the patterns of that class. Prototype classifiers are considered among the simplest and best performers in classification problems. However, they need careful positioning of the prototypes to capture the distribution of each class region and/or to define the class boundaries. Standard methods, such as learning vector quantization (LVQ), are sensitive to the initial choice of the number and locations of the prototypes and to the learning rate. In this article, a new prototype classification method is proposed, namely self-generating prototypes (SGP). The main advantage of this method is that both the number of prototypes and their locations are learned from the training set without much human intervention. The proposed method is compared with other prototype classifiers such as LVQ, the self-generating neural tree (SGNT) and K-nearest neighbor (K-NN), as well as Gaussian mixture model (GMM) classifiers. In our experiments, SGP achieved the best performance on several measures, such as training speed and test (classification) speed; in number of prototypes and test classification accuracy it was considerably better than the other methods and about equal on average to the GMM classifiers. We also applied the SGP method to the well-known STATLOG benchmark, where it beat all 21 other methods (prototype and non-prototype methods) in classification accuracy.
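For context, the sketch below shows the basic nearest-prototype decision rule that classifiers of this family share. It is not the SGP algorithm itself; the prototype-generation step here is a deliberately crude stand-in (one prototype per class, the class mean), used only to illustrate the data structures involved.

```python
import numpy as np

def fit_prototypes(X, y):
    """Toy prototype generation: one prototype per class (the class mean).
    SGP itself learns both the number and the locations of the prototypes;
    this stand-in only illustrates the idea of class representatives."""
    protos, labels = [], []
    for c in np.unique(y):
        protos.append(X[y == c].mean(axis=0))
        labels.append(c)
    return np.vstack(protos), np.array(labels)

def predict(X, protos, proto_labels):
    """Nearest-prototype decision rule: assign each sample the label of
    its closest prototype (squared Euclidean distance)."""
    d2 = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[d2.argmin(axis=1)]

# Usage on toy two-class data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
P, L = fit_prototypes(X, y)
print(predict(X[:5], P, L))
```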


international symposium on neural networks | 1997

Algorithms for optimal linear combinations of neural networks

Sherif Hashem

Recently, several techniques have been developed for combining neural networks. Combining a number of trained neural networks may yield better model accuracy, without requiring extensive efforts in training the individual networks or optimizing their architecture. However, since the corresponding outputs of the combined networks approximate the same physical quantity (or quantities), the linear dependency (collinearity) among these outputs may affect the estimation of the optimal combination weights for combining the networks, resulting in a combined model which is inferior to the apparent best network. In this paper, we present two algorithms for selecting the component networks for the combination in order to reduce the ill effects of collinearity, thus improving the generalization ability of the combined model. Experimental results are included.
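As a reference point for the abstract above, here is a minimal sketch of MSE-optimal linear combination of trained estimators: the combination weights are fit by least squares on held-out predictions. The component-selection algorithms the paper proposes are not reproduced; the function names and synthetic data are illustrative assumptions.

```python
import numpy as np

def optimal_combination_weights(outputs, targets):
    """Estimate MSE-optimal linear combination weights by least squares.

    outputs : (n_samples, n_models) predictions of the component models
              on a held-out set; targets : (n_samples,) true values.
    Returns the weight vector w minimizing ||outputs @ w - targets||^2.
    """
    w, *_ = np.linalg.lstsq(outputs, targets, rcond=None)
    return w

def combine(outputs, w):
    """Combined prediction: a weighted sum of the component outputs."""
    return outputs @ w

# Illustrative usage with three noisy estimators of the same quantity
rng = np.random.default_rng(1)
t = rng.normal(size=200)                        # true quantity
O = np.column_stack([t + rng.normal(0, s, 200)  # three imperfect models
                     for s in (0.3, 0.5, 0.8)])
w = optimal_combination_weights(O, t)
print(w, np.mean((combine(O, w) - t) ** 2))
```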


Connection Science | 1996

Effects of Collinearity on Combining Neural Networks

Sherif Hashem

Collinearity, or linear dependency, among a number of estimators may pose a serious problem when combining these estimators. The corresponding outputs of a number of neural networks (NNs), which are trained to approximate the same quantity (or quantities), may be highly correlated. Thus, the estimation of the optimal weights for combining such networks may be subjected to the harmful effects of collinearity, which results in a final model with inferior generalization ability compared with the individual networks. In this paper, we investigate the harmful effects of collinearity on the estimation of the optimal weights for combining a number of NNs. We discuss an approach for selecting the component networks in order to improve the generalization ability of the combined model. Our experimental results demonstrate significant improvements in the generalization ability of a combined model as a result of the proper selection of the component networks. The approximation accuracy of the combined model is compared ...
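A minimal illustration of the collinearity issue discussed above: the condition number of the matrix of component outputs signals near-linear dependency, which makes the least-squares combination weights ill-determined. The greedy forward selection shown here is only an assumed stand-in for one possible selection strategy, not the algorithm from the paper.

```python
import numpy as np

def condition_number(outputs):
    """Condition number of the (n_samples, n_models) output matrix.
    A large value indicates nearly collinear component outputs."""
    return np.linalg.cond(outputs)

def greedy_select(outputs, targets, max_models):
    """Toy greedy forward selection of component networks: repeatedly add
    the network that most reduces the combined model's squared error on
    held-out data. Illustrative heuristic only."""
    selected = []
    remaining = list(range(outputs.shape[1]))

    def sse(cols):
        w, *_ = np.linalg.lstsq(outputs[:, cols], targets, rcond=None)
        r = outputs[:, cols] @ w - targets
        return float(r @ r)

    while remaining and len(selected) < max_models:
        best = min(remaining, key=lambda j: sse(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Usage: four noisy estimators plus one near-duplicate (collinear) output
rng = np.random.default_rng(2)
t = rng.normal(size=300)
O = np.column_stack([t + rng.normal(0, 0.4, 300) for _ in range(4)])
O = np.column_stack([O, O[:, 0] + 1e-3 * rng.normal(size=300)])
print(condition_number(O), greedy_select(O, t, max_models=3))
```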


international symposium on neural networks | 2007

Neural Network vs. Linear Models for Stock Market Sectors Forecasting

Ghada Abdelmouez; Sherif Hashem; Amir F. Atiya; Mohamed A. El-Gamal

The majority of work on forecasting the stock market has focused on individual stocks or stock indexes. In this study, we consider the problem of forecasting stock sectors (or industries); we have found no prior study that considers this problem. Stock sectors are indexes that group several stocks covering a specific sector of the economy, for example the banking sector, the retail sector, etc. For investment allocation purposes, it is important to know where each sector is going. In this study, we apply linear models, such as the Box-Jenkins methodology and multiple regression, as well as neural networks, to the sector forecasting problem. As it turns out, neural networks yielded the best forecasting performance.
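A hedged sketch of the kind of comparison described above, assuming a univariate sector-index series and a fixed lag structure (the paper's actual model orders, inputs, and data are not reproduced): a linear autoregressive-style model and a small multilayer perceptron are fit on the same lagged inputs and compared on a hold-out segment.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags):
    """Build (X, y) pairs where each row of X holds the previous n_lags values."""
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

# Synthetic stand-in for a sector index (the paper uses real sector data)
rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(0, 1, 600))

X, y = make_lagged(series, n_lags=5)
split = int(0.8 * len(y))
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

linear = LinearRegression().fit(Xtr, ytr)           # linear AR-style model
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                   random_state=0).fit(Xtr, ytr)    # small neural network

for name, model in [("linear", linear), ("mlp", mlp)]:
    mse = np.mean((model.predict(Xte) - yte) ** 2)
    print(f"{name}: test MSE = {mse:.3f}")
```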


international symposium on neural networks | 1999

A novel approach for training neural networks for long-term prediction

Sherif Hashem; Z. H. Ashour; E. F. Abdel Gawad; A. Abdel Hakeem

Neural networks have been widely used for time series prediction. Long-term prediction is generally far more difficult than short-term prediction, because of the difficulty of modeling the system dynamics far ahead. In this paper, we present a novel approach for training neural networks to perform long-term prediction. Our approach relies on traditional time series analysis, based on the Box-Jenkins methodology (1976), to: (1) determine the appropriate neural network architecture, (2) select the inputs to the neural network, and (3) determine the appropriate lead time for updating the connection weights of the neural network during training. We demonstrate the effectiveness of this approach in producing accurate multistep-ahead predictions on some real-world problems as well as on simulated time series data.
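A minimal sketch of multistep-ahead prediction in the spirit of the abstract: lags are chosen from the sample autocorrelation as a rough stand-in for Box-Jenkins-style input selection, and forecasts are iterated by feeding each prediction back in as an input. The paper's specific lead-time training scheme is not reproduced; the lag-selection rule and model size here are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def choose_lags(series, max_lag=20, threshold=0.2):
    """Pick lags whose sample autocorrelation exceeds a threshold."""
    s = series - series.mean()
    acf = np.array([np.corrcoef(s[:-k], s[k:])[0, 1]
                    for k in range(1, max_lag + 1)])
    lags = [k + 1 for k, r in enumerate(acf) if abs(r) > threshold]
    return lags or [1]

def iterated_forecast(model, history, lags, horizon):
    """Multistep-ahead prediction by feeding each one-step forecast back
    in as an input for the next step."""
    buf, preds = list(history), []
    for _ in range(horizon):
        x = np.array([buf[-k] for k in lags])[None, :]
        yhat = float(model.predict(x)[0])
        preds.append(yhat)
        buf.append(yhat)
    return np.array(preds)

# Usage on a noisy sinusoid standing in for a real time series
rng = np.random.default_rng(4)
series = np.sin(np.arange(800) * 0.1) + 0.1 * rng.normal(size=800)
lags = choose_lags(series)
X = np.column_stack([series[max(lags) - k:len(series) - k] for k in lags])
y = series[max(lags):]
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                     random_state=0).fit(X, y)
print(iterated_forecast(model, series[-max(lags):], lags, horizon=10)[:5])
```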


international conference on multimedia information networking and security | 2011

Computer Forensics Guidance Model with Cases Study

Sherif Hazem Noureldin; Sherif Hashem; Salma Abdalla

This work presents brief reports summarizing the application of a previously published comprehensive digital forensics process model, and of the forensic team's responsibilities, to two real-world computer forensics cases. Moreover, the information flow between each step and each phase of the model is discussed and elaborated in the form of flowchart diagrams, which are then applied to the two real cases.


International Journal of Pattern Recognition and Artificial Intelligence | 2009

Hyperspherical Prototypes for Pattern Classification

Hatem A. Fayed; Amir F. Atiya; Sherif Hashem

The nearest neighbor method is one of the most widely used pattern classification methods. However, its major drawback in practice is the curse of dimensionality. In this paper, we propose a new method to alleviate this problem significantly. In this method, we attempt to cover the training patterns of each class with a number of hyperspheres. The method attempts to design hyperspheres that are as compact as possible, and we pose this as a quadratic optimization problem. We performed several simulation experiments and found that the proposed approach results in considerable speed-up over the k-nearest-neighbor method while maintaining the same level of accuracy. It also significantly beats other prototype classification methods (like LVQ, RCE and CCCD) in most performance aspects.
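A hedged sketch of classification with per-class hyperspheres, assuming the spheres (centers and radii) have already been designed; the quadratic-optimization design step described in the abstract is not reproduced here. A test point is assigned to the class of the sphere whose surface (or interior) it is closest to.

```python
import numpy as np

def hypersphere_predict(X, centers, radii, labels):
    """Classify each row of X by its signed distance to the nearest
    class hypersphere: distance to the center minus the radius, so
    points inside a sphere get a negative score for that sphere.

    centers : (n_spheres, n_features), radii : (n_spheres,),
    labels  : (n_spheres,) class label of each sphere.
    """
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    signed = d - radii[None, :]
    return labels[signed.argmin(axis=1)]

# Toy usage with two hand-made spheres per class (illustrative values)
centers = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0], [5.0, 3.0]])
radii = np.array([1.0, 0.8, 1.2, 0.9])
labels = np.array([0, 0, 1, 1])
X = np.array([[0.2, 0.1], [4.5, 3.8]])
print(hypersphere_predict(X, centers, radii, labels))  # -> [0 1]
```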


international conference on neural information processing | 2006

Pattern classification using a set of compact hyperspheres

Amir F. Atiya; Sherif Hashem; Hatem A. Fayed

Prototype classifiers are one of the simplest and most intuitive approaches in pattern classification. However, they need careful positioning of prototypes to capture the distribution of each class region. Classical methods, such as learning vector quantization (LVQ), are sensitive to the initial choice of the number and the locations of the prototypes. To alleviate this problem, a new method is proposed that represents each class region by a set of compact hyperspheres. The number of hyperspheres and their locations are determined by setting up the problem as a set of quadratic optimization problems. Experimental results show that the proposed approach significantly beats LVQ and Restricted Coulomb Energy (RCE) in most performance aspects.
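The abstract above poses hypersphere design as quadratic optimization. As one hedged illustration of that idea (a generic formulation, not the paper's exact class-region covering procedure), the sketch below fits the smallest enclosing hypersphere of a set of points by solving the standard dual quadratic program with scipy.

```python
import numpy as np
from scipy.optimize import minimize

def min_enclosing_sphere(X):
    """Smallest enclosing hypersphere of the rows of X via the standard
    dual quadratic program:
        max_a  sum_i a_i ||x_i||^2 - || sum_i a_i x_i ||^2
        s.t.   a_i >= 0,  sum_i a_i = 1
    The center is c = sum_i a_i x_i, and the radius follows from the
    optimal objective value."""
    n = X.shape[0]
    G = X @ X.T                      # Gram matrix
    b = np.diag(G)                   # squared norms ||x_i||^2

    def neg_dual(a):
        return -(a @ b - a @ G @ a)

    cons = ({"type": "eq", "fun": lambda a: a.sum() - 1.0},)
    res = minimize(neg_dual, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    a = res.x
    center = a @ X
    radius = np.sqrt(max(a @ b - center @ center, 0.0))
    return center, radius

# Usage: cover one class's training points with a single compact sphere
rng = np.random.default_rng(5)
pts = rng.normal(0, 1, (40, 2))
c, r = min_enclosing_sphere(pts)
print(c, r)
```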


international symposium on neural networks | 1999

Neural networks based chemical process models

Sherif Hashem; Anoop Mathur; Pariz Famouri

Efficient process design and online process control to within statistical limits play vital roles in quality improvement, and often offer a competitive edge in today's industry. Here we investigate the use of artificial neural networks (ANNs) as a dynamic modeling tool. The ANN models are compared to traditional parametric regression models, and the comparison covers various features offered by each modeling technique, including model structure and accuracy measures.



Collaboration


Dive into Sherif Hashem's collaboration.

Top Co-Authors

Ghada Abdelmouez

German University in Cairo
