Ashraf M. Abdelbar
Brandon University
Publications
Featured research published by Ashraf M. Abdelbar.
Artificial Intelligence | 1998
Ashraf M. Abdelbar; Sandra Mitchell Hedetniemi
Finding maximum a posteriori (MAP) assignments, also called Most Probable Explanations, is an important problem on Bayesian belief networks. Shimony has shown that finding MAPs is NP-hard. In this paper, we show that approximating MAPs with a constant ratio bound is also NP-hard. In addition, we examine the complexity of two related problems which have been mentioned in the literature. We show that given the MAP for a belief network and evidence set, or the family of MAPs if the optimal is not unique, it remains NP-hard to find, or approximate, alternative next-best explanations. Furthermore, we show that given the MAP, or MAPs, for a belief network and an initial evidence set, it is also NP-hard to find, or approximate, the MAP assignment for the same belief network with a modified evidence set that differs from the initial set by the addition or removal of even a single node assignment. Finally, we show that our main result applies to networks with constrained in-degree and out-degree, applies to randomized approximation, and even still applies if the ratio bound, instead of being constant, is allowed to be a polynomial function of various aspects of the network topology.
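To make the MAP problem above concrete, here is a toy brute-force search over a two-node binary belief network. The network, its CPT values, and the function name are illustrative assumptions, not taken from the paper; the point is that exhaustive enumeration is exponential in the number of free nodes, which is why the hardness results matter.

```python
from itertools import product

# Toy belief network A -> B, both binary. CPT values are made up
# for illustration only.
p_a = {0: 0.6, 1: 0.4}                       # P(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},          # P(B | A)
               1: {0: 0.2, 1: 0.8}}

def map_assignment(evidence):
    """Exhaustively find the most probable full assignment consistent
    with the evidence. Exponential in the number of free nodes."""
    best, best_p = None, -1.0
    for a, b in product([0, 1], repeat=2):
        assign = {"A": a, "B": b}
        if any(assign[k] != v for k, v in evidence.items()):
            continue                          # inconsistent with evidence
        p = p_a[a] * p_b_given_a[a][b]        # joint probability P(A, B)
        if p > best_p:
            best, best_p = assign, p
    return best, best_p
```

With evidence B=1, the search compares P(A=0,B=1)=0.18 against P(A=1,B=1)=0.32 and returns the latter; the paper's results say no polynomial-time algorithm can even approximate this maximization within a constant ratio on general networks (unless P=NP).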
Applied Soft Computing | 2013
Khalid M. Salama; Ashraf M. Abdelbar; Fernando E. B. Otero; Alex Alves Freitas
The cAnt-Miner algorithm is an Ant Colony Optimization (ACO) based technique for classification rule discovery in problem domains which include continuous attributes. In this paper, we propose several extensions to cAnt-Miner. The main extension is based on the use of multiple pheromone types, one for each class value to be predicted. In the proposed µcAnt-Miner algorithm, an ant first selects a class value to be the consequent of a rule and the terms in the antecedent are selected based on the pheromone levels of the selected class value; pheromone update occurs on the corresponding pheromone type of the class value. The pre-selection of a class value also allows the use of more precise measures for the heuristic function and the dynamic discretization of continuous attributes, and further allows for the use of a rule quality measure that directly takes into account the confidence of the rule. Experimental results on 20 benchmark datasets show that our proposed extension improves classification accuracy to a statistically significant extent compared to cAnt-Miner, and has classification accuracy similar to the well-known Ripper and PART rule induction algorithms.
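A minimal sketch of the multiple-pheromone idea described in the abstract: one pheromone table per class value, an ant pre-selects the consequent class, picks an antecedent term by roulette-wheel selection on that class's pheromone levels, and the update touches only that class's pheromone type. The class labels, term names, initial levels, and one-term rules are simplifying assumptions for illustration, not the actual cAnt-Miner machinery.

```python
import random

# One pheromone table per class value (illustrative names and values).
classes = ["yes", "no"]
terms = ["age>40", "bmi>30", "glucose>120"]
pheromone = {c: {t: 1.0 for t in terms} for c in classes}

def construct_rule(rng):
    """Pre-select the rule consequent, then choose one antecedent term
    by roulette-wheel selection on that class's pheromone levels."""
    cls = rng.choice(classes)
    tau = pheromone[cls]
    r, acc = rng.random() * sum(tau.values()), 0.0
    for term, level in tau.items():
        acc += level
        if acc >= r:
            return cls, term
    return cls, terms[-1]

def reinforce(cls, term, quality, rho=0.1):
    """Evaporate and deposit only on the chosen class's pheromone type."""
    tau = pheromone[cls]
    for t in tau:
        tau[t] *= (1.0 - rho)
    tau[term] += quality
```

Because deposits for rules predicting "yes" never touch the "no" table, ants constructing rules for different classes do not interfere with each other's pheromone trails, which is the core of the multi-pheromone extension.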
international symposium on neural networks | 1996
Ashraf M. Abdelbar; G. Tagliarini
A frequently voiced complaint regarding neural networks is that it is difficult to interpret the results of training in a meaningful way. The HONEST network is a new feedforward high order neural network (HONN) which not only allows a fuller degree of adaptability in the form of the nonlinear mapping than the sigma-pi model, but also has a structure that can make it easier to understand how the network inputs come to be mapped into the network outputs. This structure also makes it easier to use external expert knowledge of the domain to examine the validity of the HONEST network solution, and makes it possible to reject some solutions. This is particularly important for embedded, failure-critical systems such as life-support systems. We have applied the HONEST network to the problem of forecasting the onset of diabetes using eight physiological measurements and genetic factors. We obtained a successful classification rate of 83%, compared to a 76% rate that had been obtained by previous researchers.
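For readers unfamiliar with high-order units, the following is a generic sigma-pi style unit: a weighted sum over products of input subsets, passed through a sigmoid. This is a textbook sketch of the model class the abstract contrasts against, not the HONEST architecture itself; the weight-indexing scheme and the restriction to first- and second-order terms are assumptions made here for brevity.

```python
import math
from itertools import combinations

def sigma_pi_unit(x, weights):
    """Generic sigma-pi (high-order) unit. `weights` maps tuples of
    input indices to coefficients; the empty tuple () is the bias.
    Only first- and second-order product terms are included here."""
    net = weights.get((), 0.0)
    n = len(x)
    for order in (1, 2):
        for idx in combinations(range(n), order):
            w = weights.get(idx, 0.0)
            prod = 1.0
            for i in idx:
                prod *= x[i]                  # product of the input subset
            net += w * prod
    return 1.0 / (1.0 + math.exp(-net))      # sigmoid activation
```

Each weight attaches to an explicit product of named inputs, which hints at why high-order structures can be inspected against domain knowledge more directly than a dense hidden layer.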
Artificial Intelligence | 2004
Ashraf M. Abdelbar
Cost-based abduction (CBA) is an important problem in reasoning under uncertainty. Finding Least-Cost Proofs (LCPs) for CBA systems is known to be NP-hard and has been a subject of considerable research over the past decade. In this paper, we show that approximating LCPs, within a fixed ratio bound of the optimal solution, is NP-hard, even for quite restricted subclasses of CBAs. We also consider a related problem concerned with the fine-tuning of a CBA's cost function.
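A toy illustration of the LCP problem discussed above (the instance, costs, and rule encoding are invented for this sketch, not drawn from the paper): assumable hypotheses carry costs, Horn-style rules derive new facts, and the least-cost proof is the cheapest assumption set from which the goal can be forward-chained.

```python
from itertools import combinations

# Illustrative CBA instance: assumable hypotheses with costs, and
# Horn rules written as (body set, head).
costs = {"h1": 3.0, "h2": 2.0, "h3": 4.0}
rules = [({"h1"}, "g"), ({"h2", "h3"}, "g")]

def proves(assumed, goal):
    """Forward-chain the rules from the assumed hypotheses to a fixpoint."""
    derived = set(assumed)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return goal in derived

def least_cost_proof(goal):
    """Exhaustive search over assumption sets; exponential in the number
    of hypotheses, consistent with the NP-hardness results above."""
    best, best_cost = None, float("inf")
    hyps = list(costs)
    for r in range(len(hyps) + 1):
        for subset in combinations(hyps, r):
            if proves(set(subset), goal):
                c = sum(costs[h] for h in subset)
                if c < best_cost:
                    best, best_cost = set(subset), c
    return best, best_cost
```

Here assuming {h1} proves the goal at cost 3.0, beating {h2, h3} at cost 6.0; the paper's contribution is that no polynomial-time algorithm can guarantee a solution within any fixed ratio of this optimum on general instances.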
international symposium on neural networks | 2003
Ashraf M. Abdelbar; Emad A. M. Andrews; Donald C. Wunsch
Abduction is the process of proceeding from data describing a set of observations or events, to a set of hypotheses which best explains or accounts for the data. Cost-based abduction (CBA) is a formalism in which evidence to be explained is treated as a goal to be proven, proofs have costs based on how much needs to be assumed to complete the proof, and the set of assumptions needed to complete the least-cost proof are taken as the best explanation for the given evidence. In previous work, we presented a method for using high order recurrent networks to find least cost proofs for CBA instances. Here, we present a method that significantly reduces the size of the neural network that is produced for a given CBA instance. We present experimental results describing the performance of this method and comparing its performance to that of the previous method.
Swarm Intelligence | 2015
Khalid M. Salama; Ashraf M. Abdelbar
Ant colony optimization (ACO) has been successfully applied to classification, where the aim is to build a model that captures the relationships between the input attributes and the target class in a given domain's dataset. The constructed classification model can then be used to predict the unknown class of a new pattern. While artificial neural networks are one of the most widely used models for pattern classification, their application is commonly restricted to fully connected three-layer topologies. In this paper, we present a new algorithm, ANN-Miner, which uses ACO to learn the structure of feed-forward neural networks. We report computational results on 40 benchmark datasets for several variations of the algorithm. Performance is compared to the standard three-layer structure trained with two different weight-learning algorithms (back propagation, and the ACO_R algorithm), and also to a greedy algorithm for learning NN structures. A nonparametric Friedman test is used to determine statistical significance. In addition, we compare our proposed algorithm with NEAT, a prominent evolutionary algorithm for evolving neural networks, as well as three different well-known state-of-the-art classifiers, namely the C4.5 decision tree induction algorithm, the Ripper classification rule induction algorithm, and support vector machines.
Neural Computing and Applications | 1998
Ashraf M. Abdelbar
international conference on swarm intelligence | 2014
Khalid M. Salama; Ashraf M. Abdelbar
international conference on swarm intelligence | 2010
Khalid M. Salama; Ashraf M. Abdelbar
congress on evolutionary computation | 2003
Ashraf M. Abdelbar; M. Mokhtar