Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where James Montgomery is active.

Publication


Featured research published by James Montgomery.


Congress on Evolutionary Computation | 2010

An analysis of the operation of differential evolution at high and low crossover rates

James Montgomery; Stephen Chen

A key parameter affecting the operation of differential evolution (DE) is the crossover rate Cr ∈ [0, 1]. While very low values are recommended for, and used with, separable problems, on non-separable problems, which include most real-world problems, Cr = 0.9 has become the de facto standard, working well across a large range of problem domains. Recent work on separable and non-separable problems has shown that lower-dimensional searches can play an important role in the performance of search techniques in higher-dimensional search spaces. However, the standard value of Cr = 0.9 implies a very high-dimensional search, which is not effective for other search techniques. An analysis of Cr across its range [0, 1] provides insight into how its value affects the performance of DE and suggests how low values may be used to improve its performance. This new understanding of the operation of DE at high and low crossover rates is useful for analysing how adaptive parameters affect DE performance and leads to new suggestions for how adaptive DE techniques might be developed.
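
The mechanism under analysis is the binomial crossover of DE/rand/1/bin. The sketch below is a minimal illustration of how Cr sets the dimensionality of each move; the population, objective bounds, and parameter values are assumptions chosen for the demonstration, not code from the paper.

    import random

    def de_trial(pop, i, F, Cr):
        """Build one DE/rand/1/bin trial vector for target pop[i]. Cr controls
        how many coordinates come from the mutant: Cr = 0.9 gives a nearly
        full-dimensional move, low Cr a low-dimensional one."""
        d = len(pop[i])
        r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
        j_rand = random.randrange(d)  # at least one coordinate is mutated
        return [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                if (random.random() < Cr or j == j_rand) else pop[i][j]
                for j in range(d)]

    random.seed(1)
    pop = [[random.uniform(-5, 5) for _ in range(30)] for _ in range(20)]
    for cr in (0.1, 0.9):
        trial = de_trial(pop, 0, F=0.5, Cr=cr)
        changed = sum(a != b for a, b in zip(trial, pop[0]))
        print(f"Cr = {cr}: {changed} of 30 dimensions mutated")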


International Journal of Computational Intelligence and Applications | 2003

The accumulated experience ant colony for the travelling salesman problem

James Montgomery; Marcus Randall

Ant colony optimization techniques are usually guided by pheromone and heuristic cost information when choosing the next element to add to a solution. However, while an individual element may be attractive, its long-term consequences are usually neither known nor considered. For instance, a short link in a travelling salesman problem may be incorporated into an ant's solution, yet, as a consequence of this link, the rest of the path may be longer than if another link had been chosen. The Accumulated Experience Ant Colony (AEAC) uses the previous experiences of the colony, in addition to the normal pheromone and heuristic costs, to guide the choice of elements. Two versions of the algorithm are presented: the original, and an improved AEAC that makes greater use of accumulated experience. The results indicate that the original algorithm finds improved solutions on problems with fewer than 100 cities, while the improved algorithm finds better solutions on larger problems.
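
A sketch of how accumulated experience might enter the element-choice rule, assuming the usual ACO pheromone (tau) and heuristic (eta) tables. The experience table exp_weight and the way it is combined with the other terms are hypothetical, as the abstract does not give the exact formula.

    import random

    def choose_next(city, unvisited, tau, eta, exp_weight, alpha=1.0, beta=2.0):
        """Roulette-wheel choice of the next city: the standard
        pheromone/heuristic product, scaled by a learned weight recording
        how links worked out in the colony's earlier solutions."""
        scores = [(tau[(city, c)] ** alpha) * (eta[(city, c)] ** beta)
                  * exp_weight.get((city, c), 1.0)
                  for c in unvisited]
        r, acc = random.uniform(0, sum(scores)), 0.0
        for c, s in zip(unvisited, scores):
            acc += s
            if acc >= r:
                return c
        return unvisited[-1]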


Applied Intelligence | 2015

Measuring the curse of dimensionality and its effects on particle swarm optimization and differential evolution

Stephen Chen; James Montgomery; Antonio Bolufé-Röhler

The existence of the curse of dimensionality is well known, and its general effects are widely acknowledged. However, perhaps because of this colloquial understanding, specific measurements of the curse of dimensionality and its effects are not as extensive. In continuous domains, the volume of the search space grows exponentially with dimensionality, yet the number of function evaluations budgeted to explore this search space usually grows only linearly. The divergence of these growth rates has important effects on the parameters used in particle swarm optimization and differential evolution as dimensionality increases. New experiments focus on the effects of population size and on key changes to the search characteristics of these popular metaheuristics when population size is less than the dimensionality of the search space. Results show how design guidelines developed for low-dimensional implementations can become unsuitable for high-dimensional search spaces.
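
The divergence of the two growth rates is easy to make concrete. The snippet below is illustrative arithmetic only; the linear budget of 5000 * D evaluations is a common benchmarking convention, not a figure from the paper.

    # Halving each axis of a unit hypercube once yields 2^D cells, while a
    # linear budget cannot keep pace: the evaluations available per cell
    # collapse towards zero as D grows.
    for D in (10, 30, 100, 1000):
        budget = 5000 * D
        cells = 2 ** D
        print(f"D = {D:4d}: budget = {budget:9d}, "
              f"evaluations per cell = {budget / cells:.3g}")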


Genetic and Evolutionary Computation Conference | 2011

A simple strategy to maintain diversity and reduce crowding in particle swarm optimization

Stephen Chen; James Montgomery

Each particle of a swarm maintains its current location and its personal best location. It is useful to think of these personal best locations as a population of attractors. When this population of attractors converges, the explorative capacity of the swarm is reduced. The convergence of attractors can occur quickly, since the personal best of a particle is broadcast to its neighbours. If a neighbouring particle comes close to this broadcasting attractor, it may update its own personal best to be near the broadcasting attractor. This convergence of attractors can be reduced by having particles update the broadcasting attractor rather than their own attractor/personal best. Through this simple change, which incurs minimal computational cost, large performance improvements can be achieved in multi-modal search spaces.
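
A sketch of the update rule described above, using a hypothetical swarm representation (a list of dicts with 'x', 'pbest', 'pbest_f', and 'informer', the index of the neighbour whose personal best is broadcast to this particle); the paper's exact acceptance conditions may differ.

    import math

    def update_attractors(swarm, i, fx, radius):
        """If particle i's new position improves on the attractor that was
        broadcast to it and lies within `radius` of it, move that attractor
        instead of creating a second, crowded personal best nearby."""
        p = swarm[i]
        informer = swarm[p['informer']]
        if (fx < informer['pbest_f']
                and math.dist(p['x'], informer['pbest']) < radius):
            informer['pbest'], informer['pbest_f'] = list(p['x']), fx
        elif fx < p['pbest_f']:
            p['pbest'], p['pbest_f'] = list(p['x']), fx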


Congress on Evolutionary Computation | 2009

Differential evolution: Difference vectors and movement in solution space

James Montgomery

In the commonly used DE/rand/1 variant of differential evolution, the primary mechanism for generating new solutions is the perturbation of a randomly selected point by a difference vector. The newly generated point may, if good enough, then replace a solution from the current generation. Because the magnitude of difference vectors diminishes as the population converges, the size of the moves made also diminishes, an oft-touted and obvious benefit of the approach. Additionally, when the population splits into separate clusters, difference vectors exist for both small and large moves. Given that a replaced solution is not the one perturbed to create the new candidate solution, are the large difference vectors responsible for the movement of population members between clusters? This paper examines the mechanisms of small and large moves, finding that small moves within one cluster result in solutions from another cluster being replaced, and so appearing to move a large distance. As clusters tighten, this is the only mechanism for movement between them.
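
The decoupling of perturbation and replacement is visible in a plain DE/rand/1 generation. This is a generic textbook sketch (crossover omitted for clarity), not the paper's experimental code.

    import random

    def de_generation(pop, f, F=0.5):
        """The point perturbed is a random base r1, but the solution replaced
        is the target i: an accepted trial near one cluster can therefore
        overwrite a member of another cluster, which then appears to have
        moved a large distance."""
        new_pop = []
        for i in range(len(pop)):
            r1, r2, r3 = random.sample(
                [k for k in range(len(pop)) if k != i], 3)
            trial = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                     for j in range(len(pop[i]))]
            new_pop.append(trial if f(trial) < f(pop[i]) else pop[i])
        return new_pop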


Artificial Life | 2005

Automated Selection of Appropriate Pheromone Representations in Ant Colony Optimization

James Montgomery; Marcus Randall; Tim Hendtlass

Ant colony optimization (ACO) is a constructive metaheuristic that uses an analogue of ant trail pheromones to learn about good features of solutions. Critically, the pheromone representation for a particular problem is usually chosen intuitively rather than by following any systematic process. In some representations, distinct solutions appear multiple times, increasing the effective size of the search space and potentially misleading ants as to the true learned value of those solutions. In this article, we present a novel system that automatically generates appropriate pheromone representations, based on the characteristics of the problem model, and that ensures each solution has a unique pheromone representation. This is the first stage in the development of a generalized ACO system that could be applied to a wide range of problems with little or no modification. However, the system we propose may also be used in the development of any problem-specific ACO algorithm.
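
A toy instance of the multiplicity problem the article addresses, using a hypothetical subset-selection problem (the article's generation system itself is not reproduced here):

    from itertools import permutations

    # A solution is the SET {a, b, c}, but an ant constructs it as a sequence.
    # A pheromone representation keyed on (construction step, item) sees every
    # ordering as a distinct solution, inflating the effective search space;
    # one keyed on the items alone represents the solution uniquely.
    solution = {"a", "b", "c"}
    print(len(list(permutations(solution))), "sequence encodings")  # 3! = 6
    print(len({frozenset(solution)}), "set encoding")               # 1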


Lecture Notes in Computer Science | 2002

Candidate Set Strategies for Ant Colony Optimisation

Marcus Randall; James Montgomery

Ant Colony Optimisation-based solvers systematically scan the set of possible solution elements before choosing a particular one. Hence, the computational time required for each step of the algorithm can be large. One way to overcome this is to limit the number of element choices to a sensible subset, or candidate set. This paper describes some novel generic candidate set strategies and tests them on the travelling salesman and car sequencing problems. The results show that the use of candidate sets helps to find competitive solutions to the test problems in a relatively short amount of time.
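
For the travelling salesman problem, the classic candidate-set strategy (a baseline rather than one of the novel strategies in the paper) keeps only the k nearest neighbours of each city, as sketched below with an assumed distance matrix dist.

    def nearest_neighbour_candidates(dist, k):
        """Precompute each city's k nearest neighbours once, so each
        construction step scans k elements instead of all n."""
        n = len(dist)
        return [sorted((c for c in range(n) if c != i),
                       key=lambda c: dist[i][c])[:k]
                for i in range(n)]

    def feasible_choices(current, visited, cand, n):
        """Scan the candidate list first; fall back to the full city set
        only when every candidate has already been visited."""
        pool = [c for c in cand[current] if c not in visited]
        return pool or [c for c in range(n) if c not in visited]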


Congress on Evolutionary Computation | 2010

Crossover and the different faces of differential evolution searches

James Montgomery

Common explanations of DE's search behaviour as its crossover rate Cr is varied focus on the directionality of the search: low values make moves aligned with a small number of axes, while high values search at angles to the axes. While the direction of search is important, an analysis of moves generated by mutating differing numbers of dimensions suggests that the probability of making a successful move is more strongly related to the move's magnitude than to the number of dimensions in which it occurs. Low-Cr moves are generally much smaller than those generated with high values, and more likely to succeed, but moves in many dimensions can produce greater improvements in solution quality. Although DE behaves differently at low and high Cr, both extremes can produce effective searches. Results suggest this is because low-Cr searches make frequent, small improvements to all population members, while high-Cr searches produce less frequent, large improvements, followed by contraction of the population and a resultant reduction in move size. The interaction of F and population size with these different modes of search is investigated, and recommendations are made for achieving good results with both.
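
The magnitude argument has a simple geometric basis: with coordinate-wise perturbations of similar scale, move length grows roughly with the square root of the number of mutated dimensions. The check below is illustrative only; unit Gaussian perturbations are an assumption, not the paper's setup.

    import math
    import random

    random.seed(1)
    for dims in (1, 3, 10, 30):
        lengths = [math.sqrt(sum(random.gauss(0, 1) ** 2 for _ in range(dims)))
                   for _ in range(10000)]
        print(f"{dims:2d} mutated dimensions: "
              f"mean move length {sum(lengths) / len(lengths):.2f}")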


Congress on Evolutionary Computation | 2013

Particle swarm optimization with thresheld convergence

Stephen Chen; James Montgomery

Many heuristic search techniques have concurrent processes of exploration and exploitation. In particle swarm optimization, an improved pbest position can represent a new, more promising region of the search space (exploration) or a better solution within the current region (exploitation). The latter can interfere with the former, since the identification of a new, more promising region depends on finding a (random) solution in that region which is better than the current pbest. Ideally, every sampled solution will have the same relative fitness with respect to its nearby local optimum - finding the best region to exploit then becomes the problem of finding the best random solution. However, a locally optimized solution from a poor region of the search space can be better than a random solution from a good region of the search space. Since exploitation can interfere with subsequent/concurrent exploration, it should be prevented during the early stages of the search process. In thresheld convergence, early exploitation is “held” back by a threshold function. Experiments show that the addition of thresheld convergence to particle swarm optimization can lead to large performance improvements in multi-modal search spaces.
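
A sketch of the threshold rule, assuming particles stored as dicts with 'pbest' and 'pbest_f' entries. The decay schedule shown (alpha, gamma, and the search-space diagonal) follows the general form used in the thresheld-convergence literature, but the exact constants here are assumptions.

    import math

    def maybe_update_pbest(p, x, fx, threshold):
        """A better solution closer than `threshold` to the current pbest is
        treated as exploitation and 'held' back; only sufficiently distant
        improvements may become the new pbest."""
        if fx < p['pbest_f'] and math.dist(x, p['pbest']) >= threshold:
            p['pbest'], p['pbest_f'] = list(x), fx

    def threshold_at(evals, max_evals, diagonal, alpha=0.2, gamma=2.0):
        """Threshold decays from alpha * diagonal to zero over the budget."""
        return alpha * diagonal * ((max_evals - evals) / max_evals) ** gamma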


Congress on Evolutionary Computation | 2013

Differential evolution with thresheld convergence

Antonio Bolufé-Röhler; Suilan Estévez-Velarde; Alejandro Piad-Morffis; Stephen Chen; James Montgomery

During the search process of differential evolution (DE), each new solution may represent a new, more promising region of the search space (exploration) or a better solution within the current region (exploitation). This concurrent exploitation can interfere with exploration, since the identification of a new, more promising region depends on finding a (random) solution in that region which is better than its target solution. Ideally, every sampled solution will have the same relative fitness with respect to its nearby local optimum - finding the best region to exploit then becomes the problem of finding the best random solution. However, differential evolution is characterized by an initial period of exploration followed by rapid convergence. Once the population starts converging, the difference vectors become shorter, more exploitation is performed, and an accelerating convergence occurs. This rapid convergence can occur well before the algorithm's budget of function evaluations is exhausted; that is, the algorithm can converge prematurely. In thresheld convergence, early exploitation is “held” back by a threshold function, allowing a longer exploration phase. This paper presents a new adaptive thresheld convergence mechanism which helps DE achieve large performance improvements in multi-modal search spaces.
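
The abstract does not specify the adaptive rule, so the following is a purely hypothetical sketch of one way a threshold could adapt: shrink it when too few trial vectors are being accepted, releasing the search into exploitation.

    def adapt_threshold(threshold, success_rate, target=0.2, step=0.9):
        """Hypothetical adaptation: keep the hold while acceptance is healthy,
        relax it when the threshold is blocking nearly all moves."""
        return threshold * step if success_rate < target else threshold / step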

Collaboration


Dive into James Montgomery's collaborations.

Top Co-Authors

Tim Hendtlass

Swinburne University of Technology

View shared research outputs

Irene Moser

Swinburne University of Technology

View shared research outputs