Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Kevin Leyton-Brown is active.

Publication


Featured research published by Kevin Leyton-Brown.


electronic commerce | 2001

Incentives for sharing in peer-to-peer networks

Philippe Golle; Kevin Leyton-Brown; Ilya Mironov

We consider the free-rider problem that arises in peer-to-peer file sharing networks such as Napster: the problem that individual users are provided with no incentive for adding value to the network. We examine the design implications of the assumption that users will selfishly act to maximize their own rewards, by constructing a formal game theoretic model of the system and analyzing equilibria of user strategies under several novel payment mechanisms. We support and extend our theoretical predictions with experimental results from a multi-agent reinforcement learning model.
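
As a rough, hypothetical illustration of the kind of equilibrium analysis described above (not the paper's actual model or mechanisms), the Python sketch below checks which strategy profiles are Nash equilibria in a toy two-player "share vs. free-ride" game, with and without a per-download micropayment; the payoff values b, c and p are made up.

```python
# Toy illustration (not the paper's model): a symmetric two-player game in
# which sharing a file costs c, downloading the other player's shared file is
# worth b, and a micropayment p per download flows from downloader to sharer.
import itertools

b, c = 3.0, 1.0  # hypothetical download benefit and sharing cost

def payoff(me_shares: bool, other_shares: bool, p: float) -> float:
    u = 0.0
    if other_shares:
        u += b - p          # I download their file and pay p
    if me_shares:
        u += p - c          # they download mine; I bear the sharing cost
    return u

def pure_nash_equilibria(p: float):
    eqs = []
    for a1, a2 in itertools.product([True, False], repeat=2):
        best1 = payoff(a1, a2, p) >= payoff(not a1, a2, p)
        best2 = payoff(a2, a1, p) >= payoff(not a2, a1, p)
        if best1 and best2:
            eqs.append((a1, a2))
    return eqs

print(pure_nash_equilibria(p=0.0))  # without payments, only free-riding survives
print(pure_nash_equilibria(p=2.0))  # with p > c, mutual sharing is an equilibrium
```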


learning and intelligent optimization | 2011

Sequential model-based optimization for general algorithm configuration

Frank Hutter; Holger H. Hoos; Kevin Leyton-Brown

State-of-the-art algorithms for hard computational problems often expose many parameters that can be modified to improve empirical performance. However, manually exploring the resulting combinatorial space of parameter settings is tedious and tends to lead to unsatisfactory outcomes. Recently, automated approaches for solving this algorithm configuration problem have led to substantial improvements in the state of the art for solving various problems. One promising approach constructs explicit regression models to describe the dependence of target algorithm performance on parameter settings; however, this approach has so far been limited to the optimization of a few numerical algorithm parameters on single instances. In this paper, we extend this paradigm for the first time to general algorithm configuration problems, allowing many categorical parameters and optimization for sets of instances. We experimentally validate our new algorithm configuration procedure by optimizing a local search and a tree search solver for the propositional satisfiability problem (SAT), as well as the commercial mixed integer programming (MIP) solver CPLEX. In these experiments, our procedure yielded state-of-the-art performance, and in many cases outperformed the previous best configuration approach.
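
The following Python sketch illustrates the general idea of sequential model-based optimization under simplifying assumptions; it is not SMAC or the paper's procedure. A random-forest model of cost as a function of parameter settings guides the choice of the next configuration, interleaved with occasional random picks, and the target algorithm is replaced by a synthetic two-parameter cost function.

```python
# Minimal SMBO-style loop (illustrative sketch): fit a model of cost vs.
# configuration, pick the next configuration from a random candidate pool
# using the model, evaluate it, and repeat.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def target_algorithm_cost(x):
    # Stand-in for running the target algorithm with configuration x and
    # measuring its runtime; here a synthetic 2-parameter cost surface.
    return (x[0] - 0.3) ** 2 + 2 * (x[1] - 0.7) ** 2 + rng.normal(0, 0.01)

X = rng.uniform(size=(5, 2)).tolist()            # initial random configurations
y = [target_algorithm_cost(x) for x in X]

for it in range(30):
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    candidates = rng.uniform(size=(500, 2))
    if it % 4 == 0:                              # interleave a random configuration
        nxt = candidates[0]
    else:                                        # otherwise take the model's favourite
        nxt = candidates[np.argmin(model.predict(candidates))]
    X.append(nxt.tolist())
    y.append(target_algorithm_cost(nxt))

print("best configuration found:", X[int(np.argmin(y))], "cost:", min(y))
```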


electronic commerce | 2000

Towards a universal test suite for combinatorial auction algorithms

Kevin Leyton-Brown; Mark Pearson; Yoav Shoham

General combinatorial auctions—auctions in which bidders place unrestricted bids for bundles of goods—are the subject of increasing study. Much of this work has focused on algorithms for finding an optimal or approximately optimal set of winning bids. Comparatively little attention has been paid to methodical evaluation and comparison of these algorithms. In particular, there has not been a systematic discussion of appropriate data sets that can serve as universally accepted and well motivated benchmarks. In this paper we present a suite of distribution families for generating realistic, economically motivated combinatorial bids in five broad real-world domains. We hope that this work will yield many comments, criticisms and extensions, bringing the community closer to a universal combinatorial auction test suite.
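
As a loose, hypothetical analogue of an economically motivated bid distribution (not one of the paper's actual distribution families), the sketch below generates bids for bundles of adjacent goods, think paths or spectrum blocks, with mildly superadditive values.

```python
# Hypothetical, simplified bid generator: bidders want bundles of adjacent
# goods, and bundle value grows superadditively with bundle size.
import random

random.seed(0)
NUM_GOODS = 20

def generate_bid():
    length = random.randint(2, 5)                      # bundle of adjacent goods
    start = random.randint(0, NUM_GOODS - length)
    bundle = tuple(range(start, start + length))
    base = sum(random.uniform(0.8, 1.2) for _ in bundle)
    value = base * (1.0 + 0.1 * (length - 1))          # mild superadditivity
    return bundle, round(value, 2)

for bundle, value in (generate_bid() for _ in range(10)):
    print(f"bid {value:6.2f} for goods {bundle}")
```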


knowledge discovery and data mining | 2013

Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms

Chris Thornton; Frank Hutter; Holger H. Hoos; Kevin Leyton-Brown

Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.
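
The sketch below illustrates the combined algorithm selection and hyperparameter optimization problem using scikit-learn and plain random search, rather than WEKA and Bayesian optimization as in the paper: the search space is hierarchical, because each choice of classifier activates its own hyperparameters. The classifiers, ranges, and dataset are illustrative choices.

```python
# Joint search over (classifier, hyperparameters) with random sampling and
# cross-validation; a sketch of the CASH problem, not Auto-WEKA itself.
import random
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

random.seed(0)
X, y = load_digits(return_X_y=True)

SPACE = {
    "random_forest": lambda: RandomForestClassifier(
        n_estimators=random.choice([50, 100, 200]),
        max_depth=random.choice([None, 5, 10]),
        random_state=0),
    "svm": lambda: SVC(
        C=10 ** random.uniform(-2, 2),
        gamma=10 ** random.uniform(-4, 0)),
}

best_score, best_model = -1.0, None
for _ in range(20):
    name = random.choice(list(SPACE))      # select the learning algorithm ...
    model = SPACE[name]()                  # ... and sample its hyperparameters
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_model = score, model

print(best_model, f"cv accuracy {best_score:.3f}")
```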


principles and practice of constraint programming | 2002

Learning the Empirical Hardness of Optimization Problems: The Case of Combinatorial Auctions

Kevin Leyton-Brown; Eugene Nudelman; Yoav Shoham

We propose a new approach for understanding the algorithm-specific empirical hardness of NP-hard problems. In this work we focus on the empirical hardness of the winner determination problem--an optimization problem arising in combinatorial auctions--when solved by ILOG's CPLEX software. We consider nine widely-used problem distributions and sample randomly from a continuum of parameter settings for each distribution. We identify a large number of distribution-nonspecific features of data instances and use statistical regression techniques to learn, evaluate and interpret a function from these features to the predicted hardness of an instance.
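
A minimal sketch of the empirical-hardness-model idea, using assumed features and synthetic runtimes (the paper's feature set and regression machinery are far richer): fit a regression from instance features to log runtime, then inspect the learned coefficients and predict the hardness of a new instance.

```python
# Illustrative empirical hardness model on synthetic data: regress log runtime
# on a few instance features and interpret which features drive hardness.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 200
features = np.column_stack([
    rng.uniform(50, 500, n),      # e.g. number of bids
    rng.uniform(10, 100, n),      # e.g. number of goods
    rng.uniform(0, 1, n),         # e.g. a bid-graph density statistic
])
# Synthetic "true" relationship: hardness driven mostly by the density feature.
log_runtime = 0.002 * features[:, 0] + 3.0 * features[:, 2] + rng.normal(0, 0.2, n)

model = Ridge().fit(features, log_runtime)
print("learned coefficients:", model.coef_)
print("predicted log-runtime of a new instance:",
      model.predict([[300, 40, 0.9]])[0])
```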


adaptive agents and multi-agents systems | 2004

Run the GAMUT: A Comprehensive Approach to Evaluating Game-Theoretic Algorithms

Eugene Nudelman; Jennifer Wortman; Yoav Shoham; Kevin Leyton-Brown

We present GAMUT, a suite of game generators designed for testing game-theoretic algorithms. We explain why such a generator is necessary, offer a way of visualizing relationships between the sets of games supported by GAMUT, and give an overview of GAMUT's architecture. We highlight the importance of using comprehensive test data by benchmarking existing algorithms. We show surprisingly large variation in algorithm performance across different sets of games for two widely-studied problems: computing Nash equilibria and multiagent learning in repeated games.
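
As an illustrative stand-in (not GAMUT itself), the sketch below draws two-player games from two hypothetical generators and benchmarks a brute-force pure-strategy Nash equilibrium search on each, showing how behaviour can differ across game distributions.

```python
# Benchmark a simple pure-strategy Nash search on games from two generators.
import time, itertools
import numpy as np

rng = np.random.default_rng(0)

def random_game(n):                    # independent uniform payoffs
    return rng.uniform(size=(n, n, 2))

def coordination_game(n):              # extra payoff for both players on the diagonal
    g = rng.uniform(size=(n, n, 2))
    for i in range(n):
        g[i, i] += 2.0
    return g

def pure_nash(g):
    n = g.shape[0]
    return [(i, j) for i, j in itertools.product(range(n), repeat=2)
            if g[i, j, 0] >= g[:, j, 0].max() and g[i, j, 1] >= g[i, :, 1].max()]

for name, gen in [("uniform", random_game), ("coordination", coordination_game)]:
    start = time.perf_counter()
    counts = [len(pure_nash(gen(50))) for _ in range(20)]
    print(f"{name:12s} avg pure equilibria {np.mean(counts):4.1f} "
          f"time {time.perf_counter() - start:.2f}s")
```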


Artificial Intelligence | 2014

Algorithm runtime prediction: Methods & evaluation

Frank Hutter; Lin Xu; Holger H. Hoos; Kevin Leyton-Brown

Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input, using machine learning techniques to build a model of the algorithm's runtime as a function of problem-specific instance features. Such models have important applications to algorithm analysis, portfolio-based algorithm selection, and the automatic configuration of parameterized algorithms. Over the past decade, a wide variety of techniques have been studied for building such models. Here, we describe extensions and improvements of existing models, new families of models, and -- perhaps most importantly -- a much more thorough treatment of algorithm parameters as model inputs. We also comprehensively describe new and existing features for predicting algorithm runtime for propositional satisfiability (SAT), travelling salesperson (TSP) and mixed integer programming (MIP) problems. We evaluate these innovations through the largest empirical analysis of its kind, comparing to a wide range of runtime modelling techniques from the literature. Our experiments consider 11 algorithms and 35 instance distributions; they also span a very wide range of SAT, MIP, and TSP instances, with the least structured having been generated uniformly at random and the most structured having emerged from real industrial applications. Overall, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously.
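
The sketch below shows, on synthetic data, the core modelling step with algorithm parameters treated as model inputs alongside instance features; a random forest is used as the regression model, and all features, parameters, and runtimes are fabricated for illustration.

```python
# Runtime model over joint (instance features, algorithm parameters) inputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
instance_feats = rng.uniform(size=(n, 3))     # e.g. size, clause/variable ratio, ...
algo_params    = rng.uniform(size=(n, 2))     # e.g. restart rate, noise parameter
X = np.hstack([instance_feats, algo_params])
# Synthetic log-runtime depends on instance features *and* on the parameters.
y = 2 * instance_feats[:, 1] + 5 * (algo_params[:, 0] - 0.5) ** 2 + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```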


principles and practice of constraint programming | 2004

Understanding random SAT: beyond the clauses-to-variables ratio

Eugene Nudelman; Kevin Leyton-Brown; Holger H. Hoos; Alex Devkar; Yoav Shoham

It is well known that the ratio of the number of clauses to the number of variables in a random k-SAT instance is highly correlated with the instance's empirical hardness. We consider the problem of identifying such features of random SAT instances automatically using machine learning. We describe and analyze models for three SAT solvers - kcnfs, oksolver and satz - and for two different distributions of instances: uniform random 3-SAT with varying ratio of clauses-to-variables, and uniform random 3-SAT with fixed ratio of clauses-to-variables. We show that surprisingly accurate models can be built in all cases. Furthermore, we analyze these models to determine which features are most useful in predicting whether an instance will be hard to solve. Finally, we discuss the use of our models to build SATzilla, an algorithm portfolio for SAT.
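
A heavily simplified, hypothetical sketch of SATzilla-style per-instance solver selection: compute a few cheap features of a CNF formula (including the clauses-to-variables ratio) and run whichever solver a pre-trained runtime model predicts to be fastest. The "models" below are hard-coded stand-ins, not learned predictors.

```python
# Per-instance solver selection from cheap CNF features (illustrative only).
def cnf_features(clauses, num_vars):
    num_clauses = len(clauses)
    ratio = num_clauses / num_vars              # clauses-to-variables ratio
    avg_len = sum(len(c) for c in clauses) / num_clauses
    return [num_clauses, num_vars, ratio, avg_len]

def pick_solver(features, runtime_models):
    # runtime_models: solver name -> callable mapping features to predicted seconds
    return min(runtime_models, key=lambda s: runtime_models[s](features))

# Hypothetical models: one solver degrades near the ratio ~4.26 phase transition.
models = {
    "kcnfs":    lambda f: 1.0 + 50.0 * max(0.0, 1.0 - abs(f[2] - 4.26)),
    "oksolver": lambda f: 5.0 + 0.01 * f[0],
}
clauses = [(1, -2, 3), (-1, 2, 4), (2, -3, -4), (1, 3, 4)]
print(pick_solver(cnf_features(clauses, num_vars=4), models))
```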


international joint conference on artificial intelligence | 2009

SATenstein: automatically building local search SAT solvers from components

Ashiqur R. KhudaBukhsh; Lin Xu; Holger H. Hoos; Kevin Leyton-Brown

Designing high-performance algorithms for computationally hard problems is a difficult and often time-consuming task. In this work, we demonstrate that this task can be automated in the context of stochastic local search (SLS) solvers for the propositional satisfiability problem (SAT). We first introduce a generalised, highly parameterised solver framework, dubbed SATenstein, that includes components gleaned from or inspired by existing high-performance SLS algorithms for SAT. The parameters of SATenstein control the selection of components used in any specific instantiation and the behaviour of these components. SATenstein can be configured to instantiate a broad range of existing high-performance SLS-based SAT solvers, and also billions of novel algorithms. We used an automated algorithm configuration procedure to find instantiations of SATenstein that perform well on several well-known, challenging distributions of SAT instances. Overall, we consistently obtained significant improvements over the previously best-performing SLS algorithms, despite expending minimal manual effort.
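
The sketch below conveys the idea of a highly parameterised solver framework whose configurations instantiate different algorithms; the component names and parameters are invented for illustration and are not SATenstein's actual design.

```python
# A configuration selects which components are active and how they behave,
# so many distinct local-search solvers are instantiations of one code base.
import random

COMPONENTS = {
    "pick_var": {
        "random_walk": lambda clause, scores: random.choice(clause),
        "greedy":      lambda clause, scores: max(clause, key=lambda v: scores[abs(v)]),
    },
}

def make_solver(config):
    pick = COMPONENTS["pick_var"][config["pick_var"]]
    noise = config["noise"]

    def step(unsat_clause, scores):
        # With probability `noise` take a random-walk step; otherwise use the
        # configured variable-selection component.
        if random.random() < noise:
            return random.choice(unsat_clause)
        return pick(unsat_clause, scores)

    return step

solver_a = make_solver({"pick_var": "greedy", "noise": 0.2})
solver_b = make_solver({"pick_var": "random_walk", "noise": 0.5})
print(solver_a([1, -3, 4], scores={1: 0.1, 3: 0.9, 4: 0.4}))
```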


Journal of the ACM | 2009

Empirical hardness models: Methodology and a case study on combinatorial auctions

Kevin Leyton-Brown; Eugene Nudelman; Yoav Shoham

Is it possible to predict how long an algorithm will take to solve a previously-unseen instance of an NP-complete problem? If so, what uses can be found for models that make such predictions? This article provides answers to these questions and evaluates the answers experimentally. We propose the use of supervised machine learning to build models that predict an algorithm's runtime given a problem instance. We discuss the construction of these models and describe techniques for interpreting them to gain understanding of the characteristics that cause instances to be hard or easy. We also present two applications of our models: building algorithm portfolios that outperform their constituent algorithms, and generating test distributions that emphasize hard problems. We demonstrate the effectiveness of our techniques in a case study of the combinatorial auction winner determination problem. Our experimental results show that we can build very accurate models of an algorithm's running time, interpret our models, build an algorithm portfolio that strongly outperforms the best single algorithm, and tune a standard benchmark suite to generate much harder problem instances.
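
The sketch below illustrates the second application mentioned above, tuning a generator toward harder instances, under heavy simplification: with a learned hardness model in hand, grid-search a single generator parameter for the setting whose sampled instances have the highest predicted hardness. The generator, features, and model here are synthetic stand-ins.

```python
# Use a learned hardness model to steer an instance generator toward
# parameter settings that produce harder (higher predicted runtime) instances.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def generate_instance_features(param):
    # Stand-in for: sample an instance from the generator at this parameter
    # setting, then compute its features.
    return [param, param ** 2 + rng.normal(0, 0.05)]

# Pretend we already fit a hardness model (features -> log runtime).
train_X = [generate_instance_features(p) for p in rng.uniform(0, 1, 300)]
train_y = [0.5 * f[0] + 2.0 * f[1] + rng.normal(0, 0.1) for f in train_X]
hardness_model = LinearRegression().fit(train_X, train_y)

# Grid-search the generator parameter for maximum average predicted hardness.
grid = np.linspace(0, 1, 21)
pred = [np.mean(hardness_model.predict(
            [generate_instance_features(p) for _ in range(30)])) for p in grid]
print("hardest generator setting:", grid[int(np.argmax(pred))])
```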

Collaboration


Dive into Kevin Leyton-Brown's collaborations.

Top Co-Authors

Holger H. Hoos
University of British Columbia

Lin Xu
University of British Columbia

James R. Wright
University of British Columbia

David M. Thompson
University of British Columbia

Moshe Tennenholtz
Technion – Israel Institute of Technology