
Publication


Featured research published by Andrew W. McNabb.


Congress on Evolutionary Computation | 2007

Parallel PSO using MapReduce

Andrew W. McNabb; Christopher K. Monson; Kevin D. Seppi

In optimization problems involving large amounts of data, such as web content, commercial transaction information, or bioinformatics data, individual function evaluations may take minutes or even hours. Particle swarm optimization (PSO) must be parallelized for such functions. However, large-scale parallel programs must communicate efficiently, balance work across all processors, and address problems such as failed nodes. We present MapReduce particle swarm optimization (MRPSO), a PSO implementation based on the MapReduce parallel programming model. We describe MapReduce and show how PSO can be naturally expressed in this model, without explicitly addressing any of the details of parallelization. We present a benchmark function for evaluating MRPSO and note that MRPSO is not appropriate for optimizing easily evaluated functions. We demonstrate that MRPSO scales to 256 processors on moderately difficult problems and tolerates node failures.
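
To make this concrete, here is a minimal single-process sketch of one PSO iteration phrased as a map step and a reduce step. This is not the authors' MRPSO code: the sphere benchmark, the constriction coefficients, the star topology, and the names pso_map and pso_reduce are assumptions for illustration, and a real MapReduce framework would shard the map calls across worker nodes.

```python
import random

def sphere(x):
    """Cheap stand-in for what would be an expensive objective function."""
    return sum(v * v for v in x)

def pso_map(particle):
    """Map step: evaluate one particle and update its personal best.
    Each call is independent, so a framework can shard these across nodes."""
    value = sphere(particle["pos"])
    if value < particle["pbest_val"]:
        particle["pbest_val"] = value
        particle["pbest"] = list(particle["pos"])
    # Emit under one key so the reducer sees the whole swarm (star topology).
    return ("swarm", particle)

def pso_reduce(key, particles):
    """Reduce step: find the global best, then move every particle."""
    gbest = min(particles, key=lambda p: p["pbest_val"])["pbest"]
    for p in particles:
        for i in range(len(p["pos"])):
            r1, r2 = random.random(), random.random()
            p["vel"][i] = (0.7 * p["vel"][i]
                           + 1.4 * r1 * (p["pbest"][i] - p["pos"][i])
                           + 1.4 * r2 * (gbest[i] - p["pos"][i]))
            p["pos"][i] += p["vel"][i]
    return particles

def new_particle(dim=2):
    pos = [random.uniform(-5, 5) for _ in range(dim)]
    return {"pos": pos, "vel": [0.0] * dim,
            "pbest": list(pos), "pbest_val": float("inf")}

swarm = [new_particle() for _ in range(10)]
for _ in range(20):
    pairs = [pso_map(p) for p in swarm]   # the parallelizable step
    swarm = pso_reduce("swarm", [p for _, p in pairs])
print(min(p["pbest_val"] for p in swarm))
```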


Congress on Evolutionary Computation | 2011

Solving virtual machine packing with a Reordering Grouping Genetic Algorithm

David Wilcox; Andrew W. McNabb; Kevin D. Seppi

We formally define multi-capacity bin packing, a generalization of conventional bin packing, and develop an algorithm called the Reordering Grouping Genetic Algorithm (RGGA) to assign VMs to servers. We first test RGGA on conventional bin packing problems and show that it yields excellent results, and does so much more efficiently. We then generate a multi-constraint test set and demonstrate the effectiveness of RGGA in this context. Lastly, we show the applicability of RGGA in its intended context by using it to develop an assignment of real virtual machines to servers.
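
To make the multi-capacity distinction concrete, here is a minimal sketch of the feasibility test together with a greedy first-fit baseline. It is not RGGA itself; the two resource dimensions and all names are illustrative assumptions.

```python
def fits(server_load, server_capacity, vm_demand):
    """Multi-capacity test: the VM must fit in *every* resource dimension
    (CPU, RAM, ...) simultaneously, not just in one aggregate number."""
    return all(load + need <= cap
               for load, need, cap in zip(server_load, vm_demand, server_capacity))

def first_fit(vms, capacity):
    """Greedy baseline: place each VM on the first server with room."""
    servers = []  # each server is a list of per-dimension loads
    for vm in vms:
        for load in servers:
            if fits(load, capacity, vm):
                for i, need in enumerate(vm):
                    load[i] += need
                break
        else:
            servers.append(list(vm))  # no server had room: open a new one
    return len(servers)

# Two dimensions: (CPU cores, GB of RAM). Conventional bin packing would
# consider only one of these numbers.
vms = [(2, 8), (4, 4), (1, 16), (3, 2), (2, 8)]
print(first_fit(vms, capacity=(8, 16)))  # -> 3 servers
```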


Genetic and Evolutionary Computation Conference | 2007

MRPSO: MapReduce particle swarm optimization

Andrew W. McNabb; Christopher K. Monson; Kevin D. Seppi

In optimization problems involving large amounts of data, Particle Swarm Optimization (PSO) must be parallelized because individual function evaluations may take minutes or even hours. However, large-scale parallelization is difficult because programs must communicate efficiently, balance workloads, and tolerate node failures. To address these issues, we present MapReduce Particle Swarm Optimization (MRPSO), a PSO implementation based on Google's MapReduce parallel programming model.


Congress on Evolutionary Computation | 2009

An exploration of topologies and communication in large particle swarms

Andrew W. McNabb; Matthew Gardner; Kevin D. Seppi

Particle Swarm Optimization (PSO) has typically been used with small swarms of about 50 particles. However, PSO is more efficiently parallelized with large swarms. We formally describe existing topologies and identify variations which are better suited to large swarms in both sequential and parallel computing environments. We examine the performance of PSO for benchmark functions with respect to swarm size and topology. We develop and demonstrate a new PSO variant which leverages the unique strengths of large swarms. “Hearsay PSO” allows for information to flow quickly through the swarm, even with very loosely connected topologies. These loosely connected topologies are well suited to large scale parallel computing environments because they require very little communication between particles. We consider the case where function evaluations are expensive with respect to communication as well as the case where function evaluations are relatively inexpensive. We also consider a situation where local communication is inexpensive compared to external communication, such as multicore systems in a cluster.
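
As a concrete illustration of the topology vocabulary, here is a minimal sketch of two neighbor functions: a classic ring and a loosely connected random draw. The names and the particular redraw-per-iteration scheme are assumptions, not the paper's exact Hearsay PSO definition.

```python
import random

def ring_neighbors(i, swarm_size, k=1):
    """Ring topology: particle i hears from its k neighbors on each side."""
    return [(i + d) % swarm_size for d in range(-k, k + 1) if d != 0]

def random_neighbors(i, swarm_size, k=2, rng=random):
    """A loosely connected topology: k neighbors drawn at random each
    iteration. Any single iteration needs very little communication, yet
    redrawing the sample lets information spread quickly through the swarm."""
    others = [j for j in range(swarm_size) if j != i]
    return rng.sample(others, k)

print(ring_neighbors(0, swarm_size=1000))    # [999, 1]
print(random_neighbors(0, swarm_size=1000))  # e.g. [412, 87]
```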


Swarm Intelligence | 2012

A speculative approach to parallelization in particle swarm optimization

Matthew Gardner; Andrew W. McNabb; Kevin D. Seppi

Particle swarm optimization (PSO) has previously been parallelized primarily by distributing the computation corresponding to particles across multiple processors. In these approaches, the only benefit of additional processors is an increased swarm size. However, in many cases this is not efficient when scaled to very large swarm sizes (on very large clusters). Current methods cannot answer well the question: “How can 1000 processors be fully utilized when 50 or 100 particles is the most efficient swarm size?” In this paper we attempt to answer that question with a speculative approach to the parallelization of PSO that we refer to as SEPSO.

In our approach, we refactor PSO such that the computation needed for iteration t+1 can be done concurrently with the computation needed for iteration t. Thus we can perform two iterations of PSO at once. Even with some amount of wasted computation, we show that this approach to parallelization in PSO often outperforms the standard parallelization of simply adding particles to the swarm. SEPSO produces results that are exactly equivalent to PSO; that is, SEPSO is a new method of parallelization and not a new PSO algorithm or variant.

However, given this new parallelization model, we can relax the requirement of exactly reproducing PSO in an attempt to produce better results. We present several such relaxations, including keeping the best speculative position evaluated instead of the one corresponding to the standard behavior of PSO, and speculating several iterations ahead instead of just one. We show that these methods dramatically improve the performance of parallel PSO in many cases, giving speedups of up to six times compared to previous parallelization techniques.
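
The branching structure behind the speculation can be sketched in a few lines: a particle's next position depends only on a small set of discrete outcomes of the current iteration (did its personal best improve, and which neighbor's best does it follow), so all branches can be computed up front and evaluated concurrently. This is an illustrative reconstruction, not the authors' SEPSO code; the coefficients, the seeding scheme, and all names are assumptions.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def step(pos, vel, pbest, nbest, seed):
    """One deterministic PSO update given a fixed random seed, so a
    speculative branch reproduces the real update exactly when taken."""
    rng = random.Random(seed)
    new_vel = [0.7 * v
               + 1.4 * rng.random() * (pb - x)
               + 1.4 * rng.random() * (nb - x)
               for v, x, pb, nb in zip(vel, pos, pbest, nbest)]
    return [x + v for x, v in zip(pos, new_vel)], new_vel

def speculative_children(pos, vel, pbest, neighbor_bests, seed):
    """One child per possible iteration-t outcome: the personal best either
    stays or is replaced by the current position, and any neighbor's best
    may turn out to be the one the particle follows."""
    children = {}
    for own in (pbest, pos):
        for nbest in neighbor_bests:
            children[(tuple(own), tuple(nbest))] = step(pos, vel, own, nbest, seed)
    return children

pos, vel, pbest = [1.0, 2.0], [0.1, -0.2], [0.5, 0.5]
children = speculative_children(pos, vel, pbest, [[0.0, 0.0], [1.0, 1.0]], seed=42)
# Evaluate the current position and every speculative child in one batch;
# in SEPSO these evaluations would run concurrently on separate processors.
points = [pos] + [p for p, _ in children.values()]
print([round(sphere(p), 3) for p in points])
```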


Parallel Problem Solving from Nature | 2012

The apiary topology: emergent behavior in communities of particle swarms

Andrew W. McNabb; Kevin D. Seppi

In the natural world there are many swarms in any geographical region. In contrast, Particle Swarm Optimization (PSO) is usually used with a single swarm of particles. We define a simple new topology called Apiary and show that parallel communities of swarms give rise to emergent behavior that is fundamentally different from the behavior of a single swarm of identical total size. Furthermore, we show that subswarms are essential for scaling parallel PSO to more processors with computationally inexpensive objective functions. Surprisingly, subswarms are also beneficial for scaling PSO to high dimensional problems, even in single processor environments.
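
A structural sketch of the idea follows, with the per-subswarm PSO update elided and simulated by a hypothetical stand-in: subswarms iterate independently and exchange their best positions only occasionally, which keeps inter-node communication cheap. Everything here, including step_subswarm and the exchange interval, is an illustrative assumption.

```python
import random

def step_subswarm(state, shared_best):
    """Hypothetical stand-in: advance one subswarm by a PSO iteration,
    seeded with the community's shared best, and report the subswarm's
    best. The real PSO update is elided; a random improvement simulates it."""
    state["best_val"] *= random.uniform(0.85, 1.0)
    return state["best_val"], state["best_pos"]

subswarms = [{"best_val": 100.0, "best_pos": [0.0, 0.0]} for _ in range(8)]
shared_best = None
for it in range(50):
    # Each subswarm runs independently; on a cluster these are parallel tasks.
    results = [step_subswarm(s, shared_best) for s in subswarms]
    if it % 10 == 0:
        # Communicate rarely: this exchange is the only inter-node traffic.
        shared_best = min(results, key=lambda r: r[0])[1]
print(min(s["best_val"] for s in subswarms))
```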


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

Mrs: MapReduce for Scientific Computing in Python

Andrew W. McNabb; Jeffrey Lund; Kevin D. Seppi

The MapReduce parallel programming model is designed for large-scale data processing, but its benefits, such as fault tolerance and automatic message routing, are also helpful for computationally intensive algorithms. However, popular MapReduce frameworks such as Hadoop are slow for many scientific applications and are inconvenient on the supercomputers and clusters that are common in research institutions. Mrs is a Python-based MapReduce framework that is well suited for scientific computing. We present comparisons of programs and run scripts to argue that Mrs is more convenient than Hadoop, the most popular MapReduce implementation. We also demonstrate that Mrs outperforms Hadoop for several types of problems that are relevant to scientific computing. In particular, Mrs demonstrates per-iteration overhead of about 0.3 seconds for Particle Swarm Optimization, while Hadoop takes at least 30 seconds for each MapReduce operation, a difference of two orders of magnitude.
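
For readers unfamiliar with the model, the following is a generic single-process illustration of the map, shuffle, and reduce structure that frameworks such as Mrs and Hadoop distribute across machines. It deliberately does not use the Mrs API; run_mapreduce and both callbacks are made-up names. When each phase is this cheap, per-iteration framework overhead of the kind measured above dominates the runtime.

```python
from collections import defaultdict

def run_mapreduce(inputs, mapper, reducer):
    """Single-process stand-in for a distributed MapReduce run."""
    # Map phase: each input yields (key, value) pairs; parallel in a real framework.
    shuffled = defaultdict(list)
    for item in inputs:
        for key, value in mapper(item):
            shuffled[key].append(value)   # shuffle: group values by key
    # Reduce phase: one call per key; also parallel in a real framework.
    return {key: reducer(key, values) for key, values in shuffled.items()}

def word_mapper(line):
    for word in line.split():
        yield word, 1

def count_reducer(word, counts):
    return sum(counts)

lines = ["particle swarm optimization", "swarm intelligence", "particle swarm"]
print(run_mapreduce(lines, word_mapper, count_reducer))
# {'particle': 2, 'swarm': 3, 'optimization': 1, 'intelligence': 1}
```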


Parallel Problem Solving from Nature | 2010

Speculative evaluation in particle swarm optimization

Matthew Gardner; Andrew W. McNabb; Kevin D. Seppi

Particle swarm optimization (PSO) has previously been parallelized only by adding more particles to the swarm or by parallelizing the evaluation of the objective function. However, some functions are more efficiently optimized with more iterations and fewer particles. Accordingly, we take inspiration from speculative execution performed in modern processors and propose speculative evaluation in PSO (SEPSO). Future positions of the particles are speculated and evaluated in parallel with current positions, performing two iterations of PSO at once. We also propose another way of making use of these speculative particles, keeping the best position found instead of the position that PSO actually would have taken. We show that for a number of functions, speculative evaluation gives dramatic improvements over adding additional particles to the swarm.


Congress on Evolutionary Computation | 2014

Serial PSO results are irrelevant in a multi-core parallel world

Andrew W. McNabb; Kevin D. Seppi

From multi-core processors to parallel GPUs to computing clusters, computing resources are increasingly parallel. These parallel resources are being used to address increasingly challenging applications. This presents an opportunity to design optimization algorithms that use parallel processors efficiently. In spite of the intuitively parallel nature of Particle Swarm Optimization (PSO), most PSO variants are not evaluated from a parallel perspective and introduce extra communication and bottlenecks that are inefficient in a parallel environment. We argue that the standard practice of evaluating a PSO variant by reporting function values with respect to the number of function evaluations is inadequate for evaluating PSO in a parallel environment. Evaluating the parallel performance of a PSO variant instead requires reporting function values with respect to the number of iterations to show how the algorithm scales with the number of processors, along with an implementation-independent description of task interactions and communication. Furthermore, it is important to acknowledge the dependence of performance on specific properties of the objective function and computational resources. We discuss parallel evaluation of PSO, and we review approaches for increasing concurrency and for reducing communication which should be considered when discussing the scalability of a PSO variant. This discussion is essential both for designers who are defending the performance of an algorithm and for practitioners who are determining how to apply PSO for a given objective function and parallel environment.
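
The bookkeeping argument can be shown with made-up numbers: reported against evaluations, a large swarm looks wasteful, but reported against iterations, which track wall-clock time when every particle is evaluated concurrently, it can finish far sooner. Both runs and the helper below are hypothetical.

```python
def iterations_used(evaluations, swarm_size):
    """Evaluations performed -> iterations elapsed, assuming one evaluation
    per particle per iteration with all particles evaluated concurrently."""
    return evaluations // swarm_size

# Two hypothetical variants reaching the same function value:
runs = {"variant A": {"evals": 50_000, "swarm": 50},
        "variant B": {"evals": 100_000, "swarm": 1000}}
for name, r in runs.items():
    print(name, "evaluations:", r["evals"],
          "iterations:", iterations_used(r["evals"], r["swarm"]))
# Variant B uses twice the evaluations but, with enough processors,
# finishes in one tenth of the iterations (100 vs. 1000).
```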


International Conference on Cloud Computing | 2010

Probabilistic Virtual Machine Assignment

David Wilcox; Andrew W. McNabb; Kevin D. Seppi; Kelly Flanagan

Collaboration


Dive into Andrew W. McNabb's collaborations.

Top Co-Authors

Kevin D. Seppi, Brigham Young University

Jeffrey Lund, Brigham Young University

Chace Ashcraft, Brigham Young University