Christopher D. Rosin
University of California, San Diego
Publications
Featured research published by Christopher D. Rosin.
Evolutionary Computation | 1997
Christopher D. Rosin; Richard K. Belew
We consider competitive coevolution, in which fitness is based on direct competition among individuals selected from two independently evolving populations of hosts and parasites. Competitive coevolution can lead to an arms race, in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. Competitive fitness sharing changes the way fitness is measured; shared sampling provides a method for selecting a strong, diverse set of parasites; and the hall of fame encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, and drift.
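As a rough illustration of the first technique above, competitive fitness sharing, here is a minimal sketch under the simplifying assumption of boolean win/loss outcomes (the function and variable names are hypothetical, not from the paper): a host's fitness credits each parasite it defeats in inverse proportion to how many other hosts also defeat it, so beating a rarely-beaten parasite is worth more than beating an easy one.

```python
from collections import defaultdict

def shared_fitness(wins):
    """Competitive fitness sharing (sketch).

    wins[h] is the set of parasites that host h defeats.
    A host's fitness is the sum, over each parasite p it defeats,
    of 1 / (number of hosts that defeat p), rewarding hosts that
    beat opponents few others can beat.
    """
    defeat_counts = defaultdict(int)
    for beaten in wins.values():
        for p in beaten:
            defeat_counts[p] += 1
    return {
        h: sum(1.0 / defeat_counts[p] for p in beaten)
        for h, beaten in wins.items()
    }

# Hosts A and B both beat parasite p1; only A beats the harder p2.
fitness = shared_fitness({"A": {"p1", "p2"}, "B": {"p1"}})
# A earns 1/2 for p1 plus 1/1 for p2; B earns only 1/2 for p1.
```

This captures the diversity-preserving pressure the abstract describes: under plain win counting A and B would differ by one point, while sharing magnifies the value of A's unique win.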
Annals of Mathematics and Artificial Intelligence | 2011
Christopher D. Rosin
A multi-armed bandit episode consists of n trials, each allowing selection of one of K arms, resulting in payoff from a distribution over [0,1] associated with that arm. We assume contextual side information is available at the start of the episode. This context enables an arm predictor to identify possible favorable arms, but predictions may be imperfect, so they need to be combined with further exploration during the episode. Our setting is an alternative to classical multi-armed bandits, which provide no contextual side information, and to contextual bandits, which provide new context for each individual trial. Multi-armed bandits with episode context can arise naturally, for example in computer Go, where context is used to bias move decisions made by a multi-armed bandit algorithm. The UCB1 algorithm for multi-armed bandits achieves worst-case regret bounded by O\left(\sqrt{Kn\log(n)}\right). We seek to improve this using episode context, particularly in the case where K is large. Using a predictor that places weight M_i > 0 on arm i, with weights summing to 1, we present the PUCB algorithm, which achieves regret O\left(\frac{1}{M_{\ast}}\sqrt{n\log(n)}\right), where M_{\ast} is the weight on the optimal arm. We illustrate the behavior of PUCB with small simulation experiments, present extensions that provide additional capabilities for PUCB, and describe methods for obtaining suitable predictors for use with PUCB.

International Conference Optimization in Computational Chemistry and Molecular Biology; Princeton, NJ; 05/07-09-1999 | 2000
Richard K. Belew; William E. Hart; Garrett M. Morris; Christopher D. Rosin
Hybrid evolutionary algorithms (EAs) incorporate specialized operators that exploit domain-specific features to accelerate an EA's search. In this paper we evaluate the design of the hybrid EAs that are currently used to perform flexible ligand binding in the AutoDock docking software. We consider hybrid EAs that use an integrated local search operator to refine individuals within each iteration of the search. We evaluate several factors that impact the efficacy of a hybrid EA, and we propose new hybrid EAs that provide more robust convergence to low-energy docking configurations than the methods currently available in AutoDock.

conference on learning theory | 1996
Christopher D. Rosin; Richard K. Belew
Machine learning of game strategies has often depended on competitive methods that continually develop new strategies capable of defeating previous ones. We use a very inclusive definition of game and consider a framework within which a competitive algorithm makes repeated use of a strategy learning component that can learn strategies which defeat a given set of opponents. We describe game learning in terms of sets M and X of first and second player strategies, and connect the model with more familiar models of concept learning. We show the importance of the ideas of teaching set [9] and specification number [2] k in this new context. The performance of several competitive algorithms is investigated, using both worst-case and randomized strategy learning algorithms. Our central result (Theorem 4) is a competitive algorithm that solves games in a total number of strategies polynomial in lg(|M|), lg(|X|), and k. Its use is demonstrated, including an application in concept learning with a new kind of counterexample oracle. We conclude with a complexity analysis of game learning, and list a number of new questions arising from this work.

Artificial Life | 1998
Christopher D. Rosin; Richard K. Belew; Garrett M. Morris; Arthur J. Olson; David S. Goodsell
An understanding of antiviral drug resistance is important in the design of effective drugs. Comprehensive features of the interaction between drug designs and resistance mutations are difficult to study experimentally because of the very large numbers of drugs and mutants involved. We describe a computational framework for studying antiviral drug resistance. Data on HIV-1 protease are used to derive an approximate model that predicts interaction of a wide range of mutant forms of the protease with a broad class of protease inhibitors. An algorithm based on competitive coevolution is used to find highly resistant mutant forms of the protease, and effective inhibitors against such mutants, in the context of the model. We use this method to characterize general features of inhibitors that are effective in overcoming resistance, and to study related issues of selection pathways, cross-resistance, and combination therapies.

conference on learning theory | 1998
Christopher D. Rosin
We consider the problem of searching a domain for points that have a desired property, in the special case where the objective function that determines the properties of points is unknown and must be learned during search. We give a parallel to PAC learning theory that is appropriate for reasoning about the sample complexity of this problem. The learner queries the true objective function at selected points, and uses this information to choose models of the objective function from a given hypothesis class that is known to contain a correct model. These models are used to focus the search on more promising areas of the domain. The goal is to find a point with the desired property in a small number of queries. We define an analog to VC dimension, needle dimension, to be the size of the largest sample in which any single point could have the desired property without the other points' values revealing this information. We give an upper bound on sample complexity that is linear in needle dimension for a natural type of search protocol, and a linear lower bound for a class of constrained problems. We describe the relationship between needle dimension and VC dimension, and give several simple examples.

international conference on evolutionary computation | 1996
Christopher D. Rosin; Richard K. Belew

international conference on genetic algorithms | 1995
Christopher D. Rosin; Richard K. Belew

Vision Research | 1993
Ernst Niebur; Christof Koch; Christopher D. Rosin

Archive | 1997
Christopher D. Rosin; Richard K. Belew
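The predictor-weighted bandit setting described in the Annals of Mathematics and Artificial Intelligence entry above can be illustrated with a small sketch. This is not the exact PUCB algorithm from the paper (whose regret analysis depends on a specific penalty term not reproduced here); it is a simplified predictor-weighted UCB1 variant in the same spirit, in which arms with higher predictor weight M_i are tried first and explored more aggressively. All function names and parameters are illustrative assumptions.

```python
import math
import random

def weighted_ucb_choose(counts, sums, weights, t, c=2.0):
    """Simplified predictor-weighted UCB (sketch; not the paper's exact PUCB).

    counts[i]: times arm i was played; sums[i]: total payoff from arm i;
    weights[i]: predictor weight M_i > 0 (weights sum to 1); t: trial index.
    Unplayed arms get an infinite index, ties broken toward high M_i, and
    the exploration bonus is scaled up for arms the predictor favors.
    """
    def index(i):
        if counts[i] == 0:
            return (float("inf"), weights[i])
        mean = sums[i] / counts[i]
        bonus = c * weights[i] * math.sqrt(math.log(t) / counts[i])
        return (mean + bonus, weights[i])
    return max(range(len(counts)), key=index)

def run_episode(arms, weights, n, seed=0):
    """Play one n-trial episode; arms[i] is arm i's Bernoulli payoff rate."""
    rng = random.Random(seed)
    K = len(arms)
    counts, sums = [0] * K, [0.0] * K
    for t in range(1, n + 1):
        i = weighted_ucb_choose(counts, sums, weights, t)
        payoff = 1.0 if rng.random() < arms[i] else 0.0
        counts[i] += 1
        sums[i] += payoff
    return counts

# Predictor puts most weight on the better arm, as in the large-K setting
# the abstract targets; play should concentrate on that arm.
counts = run_episode(arms=[0.2, 0.8], weights=[0.3, 0.7], n=200)
```

In the computer-Go application the abstract mentions, the weights M_i would come from a move-prediction model evaluated once on the episode's context (the board position), after which the bandit runs unchanged.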
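The AutoDock entry earlier on this page describes hybrid EAs that refine individuals with an integrated local search operator inside each iteration. The following is a generic sketch of that pattern (a memetic algorithm minimizing a toy continuous function); the function names, operators, and parameters are illustrative assumptions and not AutoDock's actual search operators.

```python
import random

def hybrid_ea(fitness, dim, pop_size=20, gens=50, ls_steps=5, seed=0):
    """Generic hybrid (memetic) EA sketch: mutate each offspring, then
    refine it with a short stochastic hill climb, mirroring EAs that
    embed a local search operator inside every generation.
    Minimizes `fitness` over [-5, 5]^dim. Illustrative only."""
    rng = random.Random(seed)

    def local_search(x):
        # Short hill climb: keep small perturbations that improve fitness.
        best, best_f = x, fitness(x)
        for _ in range(ls_steps):
            cand = [xi + rng.gauss(0, 0.1) for xi in best]
            f = fitness(cand)
            if f < best_f:
                best, best_f = cand, f
        return best

    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                     # truncation selection
        children = []
        for p in parents:
            child = [xi + rng.gauss(0, 0.5) for xi in p]   # mutation
            children.append(local_search(child))           # local refinement
        pop = parents + children                           # elitist replacement
    return min(pop, key=fitness)

# Toy objective: sphere function, global minimum at the origin.
best = hybrid_ea(lambda x: sum(xi * xi for xi in x), dim=3)
```

The design question the abstract raises lives in this loop: how much budget to give `local_search` versus the global mutation/selection cycle, and which individuals to refine, are the factors that determine the hybrid's robustness.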