
Publication


Featured research published by Christopher K. Monson.


Genetic and Evolutionary Computation Conference | 2005

Exposing origin-seeking bias in PSO

Christopher K. Monson; Kevin D. Seppi

We discuss testing methods for exposing origin-seeking bias in PSO motion algorithms. The strategy of resizing the initialization space, proposed by Gehlhaar and Fogel and made popular in the PSO context by Angeline, is shown to be insufficiently general for revealing an algorithm's tendency to focus its efforts on regions at or near the origin. An alternative testing method is proposed that reveals problems with PSO motion algorithms that are not visible when merely resizing the initialization space.
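The paper's alternative test is not reproduced in this abstract. Purely as an illustration of the underlying idea — check whether an optimizer's performance depends on where the optimum sits by moving it away from the origin — here is a minimal sketch using a sphere benchmark and a basic gbest PSO (all parameter values are common defaults, assumed, not the paper's):

```python
import random

def sphere(x, center):
    """Sphere benchmark: minimum value 0 at `center`."""
    return sum((xi - ci) ** 2 for xi, ci in zip(x, center))

def pso(center, dim=2, n=20, iters=200, seed=0):
    """Basic gbest PSO; coefficients are standard assumed defaults."""
    rng = random.Random(seed)
    # Region-scaled initialization: start all particles far from the optimum.
    pos = [[rng.uniform(50.0, 100.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [sphere(p, center) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.72, 1.49, 1.49
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = sphere(pos[i], center)
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest_val

# An origin-biased motion rule would do well on the first call but
# degrade on the second; an unbiased one performs similarly on both.
at_origin = pso(center=[0.0, 0.0])
shifted = pso(center=[30.0, 30.0])
```

Comparing the two returned values exposes any dependence on the optimum's location that resizing the initialization space alone would miss.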


Genetic and Evolutionary Computation Conference | 2006

Adaptive diversity in PSO

Christopher K. Monson; Kevin D. Seppi

Spatial Extension PSO (SEPSO) and Attractive-Repulsive PSO (ARPSO) are methods for artificially injecting diversity into particle swarm optimizers, intended to encourage converged swarms to resume exploration. Although SEPSO is simple to implement, effective when tuned correctly, and intuitively appealing, its behavior can be improved by adapting its radius and bounce parameters in response to collisions. In fact, adaptation can allow SEPSO to compete with and outperform ARPSO. The adaptation strategies presented here are simple to implement, easy to tune, and retain SEPSO's intuitive appeal.
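The abstract does not give the update rules, so the following is only a hedged sketch of what collision-triggered adaptation of radius and bounce could look like; the `grow`/`shrink` factors and the exact bounce rule are illustrative assumptions, not the paper's:

```python
import math

def dist(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sepso_collision_step(pos, vel, radius, bounce, grow=1.1, shrink=0.95):
    """One hypothetical adaptive-SEPSO collision pass (illustrative only):
    particles within each other's radii reverse direction scaled by
    `bounce`, and each particle's radius adapts based on whether it
    collided this step."""
    n = len(pos)
    collided = [False] * n
    for i in range(n):
        for j in range(i + 1, n):
            if dist(pos[i], pos[j]) < radius[i] + radius[j]:
                collided[i] = collided[j] = True
    for i in range(n):
        if collided[i]:
            vel[i] = [-bounce * v for v in vel[i]]  # bounce away
            radius[i] *= grow    # push crowded particles further apart
        else:
            radius[i] *= shrink  # let isolated particles pack tighter
    return vel, radius
```

The point of adapting rather than fixing these parameters is that the right radius depends on the swarm's current spread, which changes as it converges.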


Genetic and Evolutionary Computation Conference | 2004

The Kalman Swarm

Christopher K. Monson; Kevin D. Seppi

Particle Swarm Optimization is gaining momentum as a simple and effective optimization technique. We present a new approach to PSO that significantly reduces the number of iterations required to reach good solutions. In contrast with much recent research, the focus of this work is on fundamental particle motion, making use of the Kalman Filter to update particle positions. This enhances exploration without hurting the ability to converge rapidly to good solutions.
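The paper's full formulation of Kalman-based particle motion is not reproduced here, but the scalar predict/update step of the standard Kalman Filter it builds on looks like this:

```python
def kalman_step(x, P, z, F=1.0, Q=0.1, H=1.0, R=1.0):
    """One scalar Kalman Filter step: predict the state forward, then
    correct it with observation z. F: transition, Q: process noise,
    H: observation model, R: observation noise (toy values assumed)."""
    # Predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)  # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

In a KSwarm-style setting, the "observation" would be derived from attractors such as the personal and global bests, so a particle's motion blends its predicted trajectory with noisy pulls toward good regions rather than following the standard velocity rule.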


Genetic and Evolutionary Computation Conference | 2007

MRPSO: MapReduce particle swarm optimization

Andrew W. McNabb; Christopher K. Monson; Kevin D. Seppi

In optimization problems involving large amounts of data, Particle Swarm Optimization (PSO) must be parallelized because individual function evaluations may take minutes or even hours. However, large-scale parallelization is difficult because programs must communicate efficiently, balance workloads, and tolerate node failures. To address these issues, we present MapReduce Particle Swarm Optimization (MRPSO), a PSO implementation based on Google's MapReduce parallel programming model.
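In MapReduce terms, the expensive part — fitness evaluation — maps naturally onto independent workers. A toy sketch of that split, simulated with Python's built-in `map` and `functools.reduce` (MRPSO's actual phase boundaries and record formats may differ):

```python
from functools import reduce

def evaluate(particle):
    """Map phase: each worker independently evaluates one particle's
    (possibly expensive) fitness. Here, a toy sphere function."""
    pid, position = particle
    fitness = sum(x * x for x in position)
    return (pid, position, fitness)

def better(a, b):
    """Reduce phase: combine evaluations, keeping the best-so-far."""
    return a if a[2] <= b[2] else b

swarm = [(0, [3.0, 4.0]), (1, [1.0, 1.0]), (2, [0.5, -0.5])]
evaluated = map(evaluate, swarm)  # parallelizable across workers
best = reduce(better, evaluated)  # → (2, [0.5, -0.5], 0.5)
```

Because each map task is independent, a failed node's evaluations can simply be re-run, which is how the MapReduce model absorbs node failures.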


Genetic and Evolutionary Computation Conference | 2005

Bayesian optimization models for particle swarms

Christopher K. Monson; Kevin D. Seppi

We explore the use of information models as a guide for the development of single objective optimization algorithms, giving particular attention to the use of Bayesian models in a PSO context. The use of an explicit information model as the basis for particle motion provides tools for designing successful algorithms. One such algorithm is developed and shown empirically to be effective. Its relationship to other popular PSO algorithms is explored and arguments are presented that those algorithms may be developed from the same model, potentially providing new tools for their analysis and tuning.


Congress on Evolutionary Computation | 2005

Linear equality constraints and homomorphous mappings in PSO

Christopher K. Monson; Kevin D. Seppi

We present a homomorphous mapping that converts problems with linear equality constraints into fully unconstrained and lower-dimensional problems for optimization with PSO. This approach, in contrast with feasibility preservation methods, allows any unconstrained optimization algorithm to be applied to a problem with linear equality constraints, making available tools that are known to be effective and simplifying the process of choosing an optimizer for these kinds of constrained problems. The application of some PSO algorithms to a problem that has undergone the mapping presented here is shown to be more effective and more consistent than other approaches to handling linear equality constraints in PSO.
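As a minimal 2-D illustration of the idea (the paper's mapping handles general linear systems): a single constraint a·x = b can be reparameterized as a particular solution plus a null-space direction, after which any unconstrained optimizer searches the lower-dimensional parameter and every candidate it produces is feasible by construction:

```python
def make_mapping(a, b):
    """For one constraint a[0]*x + a[1]*y = b in 2-D, return a map from
    an unconstrained 1-D parameter t to a feasible point (x, y).
    Illustrative sketch only; not the paper's general construction."""
    norm2 = a[0] ** 2 + a[1] ** 2
    x0 = (b * a[0] / norm2, b * a[1] / norm2)  # particular solution
    n = (-a[1], a[0])                          # null-space direction
    def to_feasible(t):
        return (x0[0] + t * n[0], x0[1] + t * n[1])
    return to_feasible

# Constraint x + y = 1; minimize f(x, y) = x^2 + y^2 over t instead.
phi = make_mapping((1.0, 1.0), 1.0)
f = lambda t: sum(c * c for c in phi(t))
# Coarse unconstrained search over t; any optimizer (e.g. PSO) works here.
best_t = min((i / 100.0 - 5.0 for i in range(1001)), key=f)
```

The constrained 2-D problem has become an unconstrained 1-D one, which is the dimensionality reduction the abstract refers to.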


IEEE International Conference on Evolutionary Computation | 2006

Particle Swarm Optimization in Dynamic Pricing

Patrick Bowen Mullen; Christopher K. Monson; Kevin D. Seppi; Sean Warnick

Dynamic pricing is a real-time machine learning problem with scarce prior data and a concrete learning cost. While the Kalman Filter can be employed to track hidden demand parameters and extensions to it can facilitate exploration for faster learning, the exploratory nature of particle swarm optimization makes it a natural choice for the dynamic pricing problem. We compare both the Kalman Filter and existing particle swarm adaptations for dynamic and/or noisy environments with a novel approach that time-decays each particle's previous best value; this new strategy provides more graceful and effective transitions between exploitation and exploration, a necessity in the dynamic and noisy environments inherent to the dynamic pricing problem.
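A hedged sketch of what such a time-decayed personal best could look like for a maximization problem (the decay schedule and exact rule here are illustrative assumptions, not the paper's):

```python
def decayed_pbest_update(pbest_val, current_val, decay=0.99):
    """Hypothetical time-decaying personal best for maximization: each
    step the remembered best fades toward 'worse', so a stale best from
    an outdated environment is eventually overtaken and the particle
    resumes exploring."""
    pbest_val *= decay  # fade the old best
    return max(pbest_val, current_val)
```

With a static memory, a particle that once saw a high value keeps exploiting it even after the environment (e.g. demand) shifts; the decay lets that memory expire at a tunable rate.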


International Conference on Machine Learning and Applications | 2004

Variable resolution discretization in the joint space

Christopher K. Monson; David Wingate; Kevin D. Seppi; Todd S. Peterson

We present JoSTLe, an algorithm that performs value iteration on control problems with continuous actions, allowing this useful reinforcement learning technique to be applied to problems where a priori action discretization is inadequate. The algorithm is an extension of a variable resolution technique that works for problems with continuous states and discrete actions [6]. Results are given that indicate that JoSTLe is a promising step toward reinforcement learning in a fully continuous domain.
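JoSTLe itself is not reproduced here; for context, the discrete value iteration it generalizes, run on a toy deterministic MDP, looks like this:

```python
def value_iteration(states, actions, T, R, gamma=0.9, iters=100):
    """Plain value iteration over discrete states and actions; JoSTLe
    extends this idea to continuous actions by discretizing at variable
    resolution. T[s][a] -> next state, R[s][a] -> reward (deterministic
    toy MDP assumed for simplicity)."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        # Bellman backup: best one-step reward plus discounted future value.
        V = {s: max(R[s][a] + gamma * V[T[s][a]] for a in actions)
             for s in states}
    return V
```

The `max` over `actions` is exactly what breaks when actions are continuous: a fixed a-priori discretization may miss the maximizing action, which is the gap variable-resolution discretization in the joint state-action space addresses.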


Electronic Commerce | 2008

A graphical model for evolutionary optimization

Christopher K. Monson; Kevin D. Seppi

We present a statistical model of empirical optimization that admits the creation of algorithms with explicit and intuitively defined desiderata. Because No Free Lunch theorems dictate that no optimization algorithm can be considered more efficient than any other when considering all possible functions, the desired function class plays a prominent role in the model. In particular, this provides a direct way to answer the traditionally difficult question of what algorithm is best matched to a particular class of functions. Among the benefits of the model are the ability to specify the function class in a straightforward manner, a natural way to specify noisy or dynamic functions, and a new source of insight into No Free Lunch theorems for optimization.



Collaboration


Dive into Christopher K. Monson's collaborations.

Top Co-Authors


Kevin D. Seppi

Brigham Young University


Sean Warnick

Brigham Young University
