Holger H. Hoos
University of British Columbia
Publications
Featured research publications by Holger H. Hoos.
Architectures for Networking and Communications Systems | 2018
Nodir Kodirov; Sam Bayless; Fabian Ruffy; Ivan Beschastnikh; Holger H. Hoos; Alan J. Hu
Recent advances in network function virtualization have prompted the research community to consider data-center-scale deployments. However, existing tools, such as E2 and SOL, limit VNF chain allocation to rack-scale and provide limited support for management of allocated chains. We define a narrow API to let data center tenants and operators allocate and manage arbitrary VNF chain topologies, and we introduce NetPack, a new stochastic placement algorithm, to implement this API at data-center-scale. We prototyped the resulting system, dubbed Daisy, using the Sonata platform. In data-center-scale simulations on realistic scenarios and topologies that are orders of magnitude larger than prior work, we achieve in all cases an allocation density within 96% of a recently introduced, theoretically complete, constraint-solver-based placement engine, while being 82x faster on average. In detailed emulation with real packet traces, we find that Daisy performs each of our six API calls with at most one second of throughput drop.
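The narrow allocation/management API and the stochastic placement idea described above can be sketched as follows. All names here (`ChainAllocator`, `allocate_chain`, `deallocate_chain`) are hypothetical illustrations, not Daisy's or NetPack's actual interfaces, and the randomized first-fit heuristic is a deliberately simplified stand-in for the NetPack algorithm:

```python
import random

class ChainAllocator:
    """Toy data-center capacity model; names are illustrative, not Daisy's real API."""

    def __init__(self, server_capacities):
        self.free = dict(server_capacities)   # server id -> free capacity units
        self.chains = {}                      # chain id -> [(server, demand), ...]

    def allocate_chain(self, chain_id, vnf_demands, attempts=50, seed=0):
        """Randomized first-fit: retry with shuffled server orders until the chain fits."""
        rng = random.Random(seed)
        for _ in range(attempts):
            trial_free = dict(self.free)      # work on a copy so failed tries roll back
            placement, ok = [], True
            for demand in vnf_demands:
                servers = list(trial_free)
                rng.shuffle(servers)          # the stochastic element of the search
                for s in servers:
                    if trial_free[s] >= demand:
                        trial_free[s] -= demand
                        placement.append((s, demand))
                        break
                else:
                    ok = False                # no server fits this VNF; retry placement
                if not ok:
                    break
            if ok:
                self.free = trial_free        # commit the successful placement
                self.chains[chain_id] = placement
                return placement
        return None                           # chain could not be placed

    def deallocate_chain(self, chain_id):
        """Release every resource held by the chain back to the free pool."""
        for server, demand in self.chains.pop(chain_id):
            self.free[server] += demand
```

A real placement engine would model bandwidth, chain topology, and affinity constraints rather than a single scalar capacity, but the shape of the API (allocate, manage, deallocate) follows the narrow interface the paper describes.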
International Joint Conference on Artificial Intelligence | 2017
Sam Bayless; Nodir Kodirov; Ivan Beschastnikh; Holger H. Hoos; Alan J. Hu
Parallel Problem Solving from Nature | 2018
Yasha Pushak; Holger H. Hoos
Automated algorithm configuration procedures make use of powerful meta-heuristics to determine parameter settings that often substantially improve the performance of highly heuristic, state-of-the-art algorithms for prominent NP-hard problems, such as the TSP, SAT and mixed integer programming (MIP). These meta-heuristics were originally designed for combinatorial optimization problems with vast and challenging search landscapes. Their use in automated algorithm configuration implies that algorithm configuration landscapes are assumed to be similarly complex; however, to the best of our knowledge, no work has been done to support or reject this hypothesis. We address this gap by investigating the performance response obtained by varying individual numerical parameters while fixing the remaining parameters at optimized values. We present evidence that most parameters exhibit uni-modal and often even convex responses, indicating that algorithm configuration landscapes are likely much more benign than previously believed.
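The one-dimensional parameter scans described above can be sketched as follows. The function names (`one_dim_slice`, `is_unimodal`) are hypothetical, and the toy quadratic performance function merely illustrates the kind of uni-modal (here, convex) response the paper reports:

```python
def one_dim_slice(perf, incumbent, param, grid):
    """Vary `param` over `grid` while all other parameters stay at the incumbent values."""
    responses = []
    for value in grid:
        config = dict(incumbent)
        config[param] = value
        responses.append(perf(config))
    return responses

def is_unimodal(values, tol=1e-9):
    """True if the sequence weakly decreases and then weakly increases (a single valley)."""
    decreasing = True
    for prev, cur in zip(values, values[1:]):
        if decreasing:
            if cur > prev + tol:
                decreasing = False   # switched to the increasing phase
        elif cur < prev - tol:
            return False             # decreased again after increasing: second valley
    return True
```

For a real solver, `perf` would be a (costly, noisy) measurement of running time over a benchmark set; checking whether such slices pass a uni-modality test is the essence of the landscape analysis described above.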
Architectures for Networking and Communications Systems | 2018
Nodir Kodirov; Sam Bayless; Fabian Ruffy; Ivan Beschastnikh; Holger H. Hoos; Alan J. Hu
We propose a VNF chain abstraction to decouple a tenant's view of the VNF chain from the cloud provider's implementation. We motivate the benefits of such an abstraction for the cloud provider as well as the tenants, outline the challenges a cloud provider needs to address to make the chain abstraction practical, describe the design requirements, and report on our initial prototype.
Machine Learning | 2018
Katharina Eggensperger; Marius Lindauer; Holger H. Hoos; Frank Hutter; Kevin Leyton-Brown
The optimization of algorithm (hyper-)parameters is crucial for achieving peak performance across a wide range of domains, ranging from deep neural networks to solvers for hard combinatorial problems. However, the proper evaluation of new algorithm configuration (AC) procedures (or configurators) is hindered by two key hurdles. First, AC scenarios are hard to set up, including the target algorithm to be optimized and the problem instances to be solved. Second, and even more significantly, they are computationally expensive: a single configurator run involves many costly runs of the target algorithm. Here, we propose a benchmarking approach that uses surrogate scenarios, which are computationally cheap while remaining close to the original AC scenarios. These surrogate scenarios approximate the response surface corresponding to true target algorithm performance using a regression model. In our experiments, we construct and evaluate surrogate scenarios for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems. We generalize previous work by building surrogates for AC scenarios with multiple problem instances, stochastic target algorithms and censored running time observations. We show that our surrogate scenarios capture overall important characteristics of the original AC scenarios from which they were derived, while being much easier to use and orders of magnitude cheaper to evaluate.
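The surrogate idea described above, fitting a regression model to observed (configuration, performance) pairs so that later "runs" are cheap table look-ups, can be sketched as follows. The names are hypothetical, and a dependency-free 1-nearest-neighbour regressor stands in for the far more capable regression models (e.g. random forests) used in work of this kind:

```python
import math

def fit_surrogate(observations):
    """Fit a 1-nearest-neighbour surrogate over (feature vector, measured runtime) pairs.

    Predicting the runtime of the closest observed configuration is a crude regression
    model, but evaluating it costs microseconds instead of a full target-algorithm run.
    """
    data = list(observations)

    def predict(config):
        # Return the runtime recorded for the nearest observed configuration.
        return min(data, key=lambda obs: math.dist(config, obs[0]))[1]

    return predict
```

A benchmarking setup would then run configurators against `predict` instead of the real solver; handling multiple instances, stochastic runtimes, and censored observations, as the paper does, requires a correspondingly richer model and data representation.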
Handbook of Parallel Constraint Reasoning | 2018
Marius Lindauer; Holger H. Hoos; Frank Hutter; Kevin Leyton-Brown
In recent years the availability of parallel computation resources has grown rapidly. Nevertheless, even for the most widely studied constraint programming problems such as SAT, solver development and applications remain largely focussed on sequential rather than parallel approaches. To ease the burden usually associated with designing, implementing and testing parallel solvers, in this chapter, we demonstrate how methods from automatic algorithm design can be used to construct effective parallel portfolio solvers from sequential components. Specifically, we discuss two prominent approaches for this problem. (I) Parallel portfolio selection involves selecting a parallel portfolio consisting of complementary sequential solvers for a specific instance to be solved (as characterised by cheaply computable instance features). Applied to a broad set of sequential SAT solvers from SAT competitions, we show that our generic approach achieves nearly linear speedup on application instances, and super-linear speedups on combinatorial and random instances. (II) Automatic construction of parallel portfolios via algorithm configuration involves a parallel portfolio of algorithm parameter configurations that is optimized for a given set of instances. Applied to gold-medal-winning parameterized SAT solvers, we show that our approach can produce significantly better-performing SAT solvers than state-of-the-art parallel solvers constructed by human experts, reducing time-outs by 17% and running time (PAR10 score) by 13% under competition conditions.
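The portfolio-construction idea above can be sketched as a greedy selection over a synthetic runtime matrix. The function name and the table are hypothetical; the real approaches use learned runtime predictors, instance features, and automated algorithm configuration rather than known runtimes, but the complementarity effect is the same: a parallel portfolio finishes an instance as soon as its fastest member does.

```python
def greedy_portfolio(solvers, instances, runtime, k):
    """Greedily build a k-solver parallel portfolio minimizing total runtime.

    At each step, add the solver whose inclusion most reduces the summed
    per-instance runtime, where the portfolio's runtime on an instance is the
    minimum over its members (all members run in parallel).
    """
    portfolio = []
    for _ in range(k):
        def total_cost(candidate):
            return sum(min(runtime(s, i) for s in portfolio + [candidate])
                       for i in instances)
        best = min((s for s in solvers if s not in portfolio), key=total_cost)
        portfolio.append(best)
    return portfolio
```

With two solvers that are fast on disjoint halves of an instance set, the greedy portfolio beats any single solver, which is the super-linear-speedup intuition behind portfolio selection.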
Open Algorithm Selection Challenge 2017 | 2017
Chris Cameron; Holger H. Hoos; Kevin Leyton-Brown; Frank Hutter
Archive | 2006
Frank Hutter; Youssef Hamadi; Holger H. Hoos; Kevin Leyton-Brown
Archive | 2005
James King; Warren Cheung; Holger H. Hoos
AI Communications | 2018
Andrea F. Bocchese; Chris Fawcett; Mauro Vallati; Alfonso Gerevini; Holger H. Hoos