John A. W. McCall
Robert Gordon University
Publication
Featured research published by John A. W. McCall.
genetic and evolutionary computation conference | 2007
Ratiba Kabli; Frank Herrmann; John A. W. McCall
Bayesian networks are today used in various fields and domains due to their inherent ability to deal with uncertainty. Learning Bayesian networks, however, is an NP-hard task [7]. The super-exponential growth of the number of possible networks with the number of factors in the studied problem domain means that approximate and heuristic methods, rather than exact ones, are more often used. In this paper, a novel genetic algorithm approach for reducing the complexity of Bayesian network structure discovery is presented. We propose a method that uses chain structures, constructible from given node orderings, as a model for Bayesian networks. The chain model is used to evolve a small number of orderings, which are then injected into a greedy search phase that searches for an optimal structure. We present a series of experiments showing that a significant reduction in computational cost can be made, although with some penalty in success rate.
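The chain idea can be sketched minimally: given an ordering, treat each node's predecessor as its only parent and sum local scores along the chain. The `toy_score` below is a hypothetical stand-in for a real structure-scoring metric (such as K2 or BIC), and exhaustive search replaces the paper's GA purely for illustration.

```python
from itertools import permutations

def chain_score(ordering, local_score):
    """Score an ordering via its induced chain structure: each
    node's only parent is its predecessor in the ordering."""
    total = local_score(ordering[0], None)  # the root has no parent
    for parent, child in zip(ordering, ordering[1:]):
        total += local_score(child, parent)
    return total

# Hypothetical local score standing in for a real metric (K2, BIC, ...):
# reward edges that point from a lower- to a higher-numbered node.
def toy_score(child, parent):
    return 1.0 if parent is not None and parent < child else 0.0

best = max(permutations(range(5)), key=lambda o: chain_score(o, toy_score))
print(best)  # (0, 1, 2, 3, 4) uniquely maximises the toy chain score
```

Evaluating a chain costs one local score per edge, which is what makes orderings cheap enough to evolve before handing the best ones to a full greedy structure search.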
Evolutionary Computation | 2014
N. Al Moubayed; Andrei Petrovski; John A. W. McCall
This paper improves a recently developed multi-objective particle swarm optimizer that incorporates dominance with decomposition in the context of multi-objective optimization. Decomposition simplifies a multi-objective problem (MOP) by transforming it into a set of aggregation problems, whereas dominance plays a major role in building the leaders' archive. The improved optimizer introduces a new archiving technique that facilitates attaining better diversity and coverage in both objective and solution spaces. The improved method is evaluated on standard benchmarks, including both constrained and unconstrained test problems, by comparing it with three state-of-the-art multi-objective evolutionary algorithms: MOEA/D, OMOPSO, and dMOPSO. The comparison and analysis of the experimental results, supported by statistical tests, indicate that the proposed algorithm is highly competitive, efficient, and applicable to a wide range of multi-objective optimization problems.
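One common choice of aggregation for decomposition is the weighted Tchebycheff function (a plausible illustration; the abstract does not fix which scalarising function is used). Each weight vector turns the objective vector into a single scalar to be minimised:

```python
def tchebycheff(objectives, weights, ideal):
    """Weighted Tchebycheff aggregation: collapses an objective
    vector into one scalar to be minimised."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

ideal = (0.0, 0.0)
a = tchebycheff((1.0, 4.0), (0.5, 0.5), ideal)  # max(0.5, 2.0) = 2.0
b = tchebycheff((2.0, 2.0), (0.5, 0.5), ideal)  # max(1.0, 1.0) = 1.0
print(a, b)  # the balanced solution b scores better under equal weights
```

A set of such weight vectors yields a set of scalar subproblems, and solving them jointly approximates the whole Pareto front.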
international conference on evolutionary multi criterion optimization | 2001
Andrei Petrovski; John A. W. McCall
The main objectives of cancer treatment in general, and of cancer chemotherapy in particular, are to eradicate the tumour and to prolong the patient survival time. Traditionally, treatments are optimised with only one objective in mind. As a result of this, a particular patient may be treated in the wrong way if the decision about the most appropriate treatment objective was inadequate. To partially alleviate this problem, we show in this paper how the multi-objective approach to chemotherapy optimisation can be used. This approach provides the oncologist with versatile treatment strategies that can be applied in ambiguous cases. However, the conflicting nature of treatment objectives and the non-linearity of some of the constraints imposed on treatment schedules make it difficult to utilise traditional methods of multi-objective optimisation. Evolutionary Algorithms (EA), on the other hand, are often seen as the most suitable method for tackling the problems exhibiting such characteristics. Our present study proves this to be true and shows that EA are capable of finding solutions undetectable by other optimisation techniques.
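The core relation a multi-objective approach exploits is Pareto dominance. A minimal sketch, with hypothetical (tumour burden, toxicity) outcomes both expressed as quantities to minimise:

```python
def dominates(a, b):
    """True if schedule a Pareto-dominates schedule b
    (all objectives expressed as minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical (tumour burden, toxicity) outcomes for three schedules.
s1, s2, s3 = (0.2, 0.3), (0.4, 0.3), (0.1, 0.6)
print(dominates(s1, s2))  # True: s1 is no worse anywhere and strictly better once
print(dominates(s1, s3))  # False: s3 has lower tumour burden, so neither dominates
```

Mutually non-dominated schedules such as s1 and s3 are exactly the "versatile treatment strategies" an oncologist can choose between in ambiguous cases.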
parallel problem solving from nature | 2010
Noura Al Moubayed; Andrei Petrovski; John A. W. McCall
A novel Smart Multi-Objective Particle Swarm Optimisation method - SDMOPSO - is presented in the paper. The method uses the decomposition approach proposed in MOEA/D, whereby a multi-objective problem (MOP) is represented as several scalar aggregation problems. The scalar aggregation problems are viewed as particles in a swarm; each particle assigns weights to every optimisation objective. The problem is then solved as a Multi-Objective Particle Swarm Optimisation (MOPSO), in which every particle uses information from a set of defined neighbours. The paper also introduces a novel smart approach for sharing information between particles, whereby each particle calculates a new position in advance using its neighbourhood information and shares this new information with the swarm. The results of applying SDMOPSO on five standard MOPs show that SDMOPSO is highly competitive compared with two state-of-the-art algorithms.
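In MOEA/D-style decomposition, a particle's neighbours are usually the subproblems whose weight vectors lie closest to its own (a plausible reading of "a set of defined neighbours"; the exact definition is not given in the abstract):

```python
import math

def neighbours(weights, i, size):
    """Indices of the `size` weight vectors closest (in Euclidean
    distance) to weight vector i -- the particle's neighbourhood."""
    dist = [(math.dist(weights[i], w), j) for j, w in enumerate(weights)]
    return [j for _, j in sorted(dist)[:size]]

# Five evenly spread weight vectors for a bi-objective problem.
W = [(k / 4, 1 - k / 4) for k in range(5)]
print(neighbours(W, 0, 3))  # [0, 1, 2]: the particle exchanges info with these
```

Restricting information exchange to such neighbourhoods keeps each particle focused on its own region of the Pareto front.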
genetic and evolutionary computation conference | 2006
Andrei Petrovski; Siddhartha Shakya; John A. W. McCall
This paper presents a methodology for using heuristic search methods to optimise cancer chemotherapy. Specifically, two evolutionary algorithms - Population Based Incremental Learning (PBIL), which is an Estimation of Distribution Algorithm (EDA), and Genetic Algorithms (GAs) - have been applied to the problem of finding effective chemotherapeutic treatments. To our knowledge, EDAs have been applied to fewer real-world problems compared to GAs, and the aim of the present paper is to expand the application domain of this technique. We compare and analyse the performance of both algorithms and draw a conclusion as to which approach to cancer chemotherapy optimisation is more efficient and helpful in the decision-making activity led by oncologists.
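PBIL's defining step is simple enough to sketch: instead of a population, it maintains a probability vector and nudges it toward the best sampled solution each generation. A minimal sketch of that update rule:

```python
def pbil_update(p, best, lr=0.1):
    """Shift PBIL's probability vector a small step toward
    the best solution sampled this generation."""
    return [pi * (1 - lr) + bi * lr for pi, bi in zip(p, best)]

p = [0.5, 0.5, 0.5]          # initial probability of a 1 at each position
p = pbil_update(p, [1, 0, 1])
print(p)  # roughly [0.55, 0.45, 0.55]
```

Repeated over generations, the vector concentrates on high-fitness regions, which is what makes PBIL an (univariate) estimation of distribution algorithm rather than a GA.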
genetic and evolutionary computation conference | 2005
Siddhartha Shakya; John A. W. McCall; Deryck Forsyth Brown
This paper presents an empirical cost-benefit analysis of an algorithm called Distribution Estimation Using MRF with direct sampling (DEUMd). DEUMd belongs to the family of Estimation of Distribution Algorithms (EDAs); in particular, it is a univariate EDA. DEUMd uses a computationally more expensive model to estimate the probability distribution than other univariate EDAs do. We investigate the performance of DEUMd on a range of optimization problems. Our experiments show better performance (in terms of the number of fitness evaluations needed by the algorithm to find a solution, and the quality of the solution found) of DEUMd on most of the problems analysed in this paper in comparison to other univariate EDAs. We conclude that the use of a Markov network in a univariate EDA can be of net benefit in a defined set of circumstances.
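The distinguishing idea is that the model is fitted to fitness values rather than estimated from bit frequencies. The sketch below is a deliberately crude, marginal stand-in for that idea (DEUMd itself fits its Markov-network model by a least-squares solve, which is where the extra computational cost comes from): each bit gets a coefficient equal to the log-fitness advantage of setting it.

```python
import itertools
import math

def bitwise_alpha(pop, f):
    """Per-bit coefficient: mean log-fitness with bit i set minus
    mean log-fitness with it clear. A crude marginal stand-in for
    the full model fit DEUMd performs."""
    n = len(pop[0])
    logf = [math.log(f(x)) for x in pop]
    alphas = []
    for i in range(n):
        ones = [lf for x, lf in zip(pop, logf) if x[i] == 1]
        zeros = [lf for x, lf in zip(pop, logf) if x[i] == 0]
        alphas.append(sum(ones) / len(ones) - sum(zeros) / len(zeros))
    return alphas

# Exhaustive 5-bit population scored by shifted onemax (kept positive for log).
pop = [list(bits) for bits in itertools.product([0, 1], repeat=5)]
alphas = bitwise_alpha(pop, lambda x: 1 + sum(x))
print(alphas)  # all positive: the model favours setting every bit
```

New candidates are then sampled with each bit biased according to its coefficient, which is the "direct sampling" part of DEUMd's name.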
world congress on computational intelligence | 2008
Alexander E. I. Brownlee; John A. W. McCall; Qingfu Zhang; Deryck Forsyth Brown
Selection is one of the defining characteristics of an evolutionary algorithm, yet inherent in the selection process is the loss of some information from a population. Poor solutions may provide information about how to bias the search toward good solutions. Many estimation of distribution algorithms (EDAs) use truncation selection, which discards all solutions below a certain fitness, thus losing this information. Our previous work on distribution estimation using Markov networks (DEUM) has described an EDA which constructs a model of the fitness function; a unique feature of this approach is that, because selective pressure is built into the model itself, selection becomes optional. This paper outlines a series of experiments which make use of this property to examine the effects of selection on the population. We look at the impact of selecting only highly fit solutions, only poor solutions, selecting a mixture of highly fit and poor solutions, and abandoning selection altogether. We show that in some circumstances, particularly where some information about the problem is already known, selection of the fittest only is suboptimal.
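The selection variants being compared are easy to sketch. A minimal truncation-selection helper whose `fittest` flag switches between keeping the best and keeping the worst fraction (the exact fractions used in the experiments are not stated in the abstract):

```python
def truncate(pop, fitness, frac=0.5, fittest=True):
    """Truncation selection: keep the top (or, for the 'poor
    solutions' variants, the bottom) fraction of the population."""
    ranked = sorted(pop, key=fitness, reverse=fittest)
    return ranked[: max(1, int(len(pop) * frac))]

pop = [[0, 0], [0, 1], [1, 0], [1, 1]]
print(truncate(pop, sum))                 # the two fittest by onemax
print(truncate(pop, sum, fittest=False))  # the two poorest instead
```

A mixture variant would simply concatenate both halves' selections, and "no selection" passes the whole population to the model builder.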
congress on evolutionary computation | 2010
Yanghui Wu; John A. W. McCall; David Corne
Learning Bayesian networks from data is an NP-hard problem with important practical applications. Several researchers have designed algorithms to overcome the computational complexity of this task. Difficult challenges remain, however, in reducing computation time for structure learning in networks of medium to large size and in understanding problem-dependent aspects of performance. In this paper, we present two novel algorithms (ChainACO and K2ACO) that use Ant Colony Optimization (ACO). Both algorithms search through the space of orderings of data variables. The ChainACO approach uses chain structures to reduce the computational complexity of evaluation, but at the expense of ignoring the richer structure that is explored in the K2ACO approach. The novel algorithms presented here are ACO versions of previously published GA approaches. We are therefore able to compare ACO vs GA algorithms and Chain vs K2 evaluations. We present a series of experiments on three well-known benchmark problems. Our results show problem-specific trade-offs between solution quality and computational effort. However, it seems that the ACO-based approaches might be favoured for larger problems, achieving better fitness values and success rates than their GA counterparts on the largest network studied in our experiments.
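Searching the space of orderings with ACO means each ant builds a permutation of the variables, guided by pheromone. A minimal sketch, assuming a position-by-node pheromone matrix (one plausible parameterisation; the papers' exact pheromone model is not given here):

```python
import random

def construct_ordering(pheromone, rng):
    """Sample one node ordering: at each position, pick among the
    remaining nodes in proportion to their pheromone for that slot."""
    n = len(pheromone)
    remaining = list(range(n))
    ordering = []
    for pos in range(n):
        weights = [pheromone[pos][v] for v in remaining]
        choice = rng.choices(remaining, weights=weights, k=1)[0]
        ordering.append(choice)
        remaining.remove(choice)
    return ordering

# Uniform initial pheromone over 4 positions x 4 nodes.
tau = [[1.0] * 4 for _ in range(4)]
order = construct_ordering(tau, random.Random(42))
print(sorted(order))  # always a valid permutation: [0, 1, 2, 3]
```

Each sampled ordering is then scored either cheaply via a chain structure (ChainACO) or by running a K2-style greedy search over it (K2ACO), and good orderings deposit extra pheromone.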
ieee international conference on evolutionary computation | 2006
Siddhartha Shakya; John A. W. McCall; Deryck Forsyth Brown
Markov random field (MRF) modelling techniques have recently been proposed as a novel approach to probabilistic modelling for estimation of distribution algorithms (EDAs). An EDA using this technique was called distribution estimation using Markov random fields (DEUM). DEUM was later extended to DEUMd. DEUM and DEUMd use a univariate model of probability distribution, and have been shown to perform better than other univariate EDAs for a range of optimization problems. This paper extends DEUM to use a bivariate model and applies it to Ising spin glass problems. We propose two variants of DEUM that use different sampling techniques. Our experimental results show a noticeable gain in performance.
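The Ising spin glass is a natural benchmark for a bivariate model because its fitness is itself a sum of pairwise interactions. A minimal one-dimensional instance (real benchmarks use 2-D lattices with mixed-sign couplings):

```python
def ising_energy(spins, couplings):
    """Energy of a one-dimensional Ising chain: -sum_i J_i * s_i * s_{i+1}.
    Optimisers for this problem seek the spin assignment minimising it."""
    pairs = zip(spins, spins[1:])
    return -sum(J * a * b for J, (a, b) in zip(couplings, pairs))

print(ising_energy([1, 1, 1, 1], [1, 1, 1]))    # -3: aligned spins, ground state
print(ising_energy([1, -1, 1, -1], [1, 1, 1]))  # +3: worst case for these couplings
```

Because the energy decomposes over neighbouring pairs, a bivariate MRF model can represent exactly the interactions a univariate model must ignore.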
parallel problem solving from nature | 2004
Andrei Petrovski; Bhavani Sudha; John A. W. McCall
Cancer chemotherapy is a complex treatment mode that requires balancing the benefits of treating tumours using anti-cancer drugs with the adverse toxic side-effects caused by these drugs. Some methods of computational optimisation, Genetic Algorithms in particular, have proven to be useful in helping to strike the right balance. The purpose of this paper is to study how an alternative optimisation method – Particle Swarm Optimisation – can be used to facilitate finding optimal chemotherapeutic treatments, and to compare its performance with that of Genetic Algorithms.
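The canonical PSO update that such a study builds on is compact: each particle's velocity blends inertia with attraction toward its personal best and the swarm's global best (parameter values below are common illustrative defaults, not the paper's settings):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rand=random.random):
    """One canonical PSO update: inertia plus stochastic attraction
    toward the particle's personal best and the swarm's global best."""
    new_v = [w * vi
             + c1 * rand() * (pb - xi)
             + c2 * rand() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + nvi for xi, nvi in zip(x, new_v)]
    return new_x, new_v

# When a particle sits exactly on both bests, only inertia remains.
x, v = pso_step([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0])
print(x, v)  # [0.7, 0.7] [0.7, 0.7]
```

For chemotherapy optimisation, each position vector would encode a dosing schedule, with fitness balancing tumour response against toxicity constraints.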