Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel Horn is active.

Publication


Featured research published by Daniel Horn.


International Conference on Evolutionary Multi-Criterion Optimization | 2015

Model-Based Multi-objective Optimization: Taxonomy, Multi-Point Proposal, Toolbox and Benchmark

Daniel Horn; Tobias Wagner; Dirk Biermann; Claus Weihs; Bernd Bischl

Within the last 10 years, many model-based multi-objective optimization (MBMO) algorithms have been proposed. In this paper, a taxonomy of these algorithms is derived. It is shown which contributions were made to which phase of the MBMO process. Special attention is given to the proposal of a set of points for parallel evaluation within a batch. Proposals for four different MBMO algorithms are presented and compared to their sequential variants within a comprehensive benchmark. In particular for the classic ParEGO algorithm, significant improvements are obtained. The implementations of all algorithm variants are organized according to the taxonomy and are shared in the open-source R package mlrMBO.
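
The multi-point proposal can be illustrated with the scalarization underlying ParEGO: each randomly drawn weight vector collapses the objective vector into a different single objective, so a batch of weight vectors yields a batch of candidate points for parallel evaluation. A minimal Python sketch of this idea on toy data (not the mlrMBO API; in a full MBMO loop a surrogate model would be fit and its infill criterion optimized instead of scanning fixed candidates):

```python
import numpy as np

def augmented_tchebycheff(Y, w, rho=0.05):
    """ParEGO-style scalarization of an (n, k) matrix of minimized,
    [0, 1]-normalized objective values under weight vector w."""
    return np.max(Y * w, axis=1) + rho * np.sum(Y * w, axis=1)

rng = np.random.default_rng(1)
Y = rng.random((100, 2))        # toy values for 2 objectives at 100 candidates

# Multi-point proposal: one random weight vector per batch slot; each
# scalarization selects a different compromise point, and the resulting
# batch can be evaluated in parallel.
batch = []
for _ in range(4):
    w = rng.dirichlet(np.ones(2))
    batch.append(int(np.argmin(augmented_tchebycheff(Y, w))))
print(batch)                    # indices of 4 points proposed for evaluation
```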


IEEE Symposium Series on Computational Intelligence | 2016

Multi-objective parameter configuration of machine learning algorithms using model-based optimization

Daniel Horn; Bernd Bischl

The performance of many machine learning algorithms heavily depends on the setting of their respective hyperparameters. Many different tuning approaches exist, from simple grid or random search approaches to evolutionary algorithms and Bayesian optimization. Often, these algorithms are used to optimize a single performance criterion. But in practical applications, a single criterion may not be sufficient to adequately characterize the behavior of the machine learning method under consideration and the Pareto front of multiple criteria has to be considered. We propose to use model-based multi-objective optimization to efficiently approximate such Pareto fronts.
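
The Pareto front mentioned above is simply the set of configurations that are not dominated in every criterion. A minimal sketch with hypothetical (misclassification rate, prediction time) tuning results, both to be minimized:

```python
import numpy as np

def pareto_front(points):
    """Indices of the non-dominated rows (all objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical results: one (error, prediction time) pair per configuration.
results = np.array([[0.10, 2.0],
                    [0.12, 0.5],
                    [0.09, 5.0],
                    [0.15, 0.4],
                    [0.12, 1.0]])
print(pareto_front(results))    # [0, 1, 2, 3]; [0.12, 1.0] is dominated
```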


Advances in Data Analysis and Classification | 2016

A comparative study on large scale kernelized support vector machines

Daniel Horn; Aydın Demircioğlu; Bernd Bischl; Tobias Glasmachers; Claus Weihs

Kernelized support vector machines (SVMs) belong to the most widely used classification methods. However, in contrast to linear SVMs, the computation time required to train such a machine becomes a bottleneck when facing large data sets. In order to mitigate this shortcoming of kernel SVMs, many approximate training algorithms have been developed. While most of these methods claim to be much faster than the state-of-the-art solver LIBSVM, a thorough comparative study is missing. We aim to fill this gap. We choose several well-known approximate SVM solvers and compare their performance on a number of large benchmark data sets. Our focus is to analyze the trade-off between prediction error and runtime for different learning and accuracy parameter settings. This includes simple subsampling of the data, the poor man's approach to handling large-scale problems. We employ model-based multi-objective optimization, which allows us to tune the parameters of the learning machine and the solver over the full range of accuracy/runtime trade-offs. We analyze (differences between) solvers by studying and comparing the Pareto fronts formed by the two objectives, classification error and training time. Unsurprisingly, given more runtime, most solvers are able to find more accurate solutions, i.e., achieve a higher prediction accuracy. It turns out that LIBSVM with subsampling of the data is a strong baseline. Some solvers systematically outperform others, which allows us to give concrete recommendations on when to use which solver.
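
The subsampling baseline is straightforward to reproduce. A minimal sketch using scikit-learn's SVC (which wraps LIBSVM), with synthetic data and arbitrary subset sizes, recording one point of the error/runtime trade-off per subset size:

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Train a kernel SVM on growing random subsets of the training data and
# record the (test error, training time) pair each subset size yields.
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for n in (500, 2000, 8000):
    idx = rng.choice(len(X_tr), size=n, replace=False)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    t0 = time.perf_counter()
    clf.fit(X_tr[idx], y_tr[idx])
    elapsed = time.perf_counter() - t0
    err = 1.0 - clf.score(X_te, y_te)
    print(f"n={n:5d}  error={err:.3f}  time={elapsed:.2f}s")
```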


Archives of Data Science, Series A | 2017

Multi-objective selection of algorithm portfolios

Daniel Horn; Bernd Bischl; Aydın Demircioğlu; Tobias Glasmachers; Tobias Wagner; Claus Weihs

We propose a method for selecting a portfolio of algorithms optimizing multiple criteria. We select a portfolio of limited size but good quality from a possibly large pool of algorithms. Our method also helps to decide which algorithm to use for each trade-off between conflicting objectives. Many algorithms depend on a number of parameters and therefore require problem-specific tuning for suitable performance. In multi-objective tuning, different parameter settings of one algorithm will lead to different trade-offs between the conflicting objectives. Hence, discrete approximations of the corresponding Pareto front resulting from different parameter settings of each algorithm must be compared. Our technique is applied post hoc to these approximations. It discards algorithms that contribute only insignificantly to the overall Pareto front and delivers simple and interpretable decision rules for which algorithm to choose based on the desired trade-off. The new method is applied to the selection of approximate support vector machine solvers, where the objectives are high accuracy and short training time. The analysis hints at dropping several solvers completely and yields insights into the specific strengths of the remaining solvers.
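
Discarding algorithms that contribute little to the overall Pareto front can be made concrete with hypervolume contributions: remove one algorithm's points and measure how much dominated hypervolume the joint front loses. A sketch with made-up two-dimensional fronts (the paper's actual selection criterion may differ):

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Dominated hypervolume of 2-D minimization points w.r.t. a reference
    point; dominated points are skipped automatically by the y-check."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(points):
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# Hypothetical (error, training time) fronts from tuning three solvers.
fronts = {"A": [(0.10, 1.0), (0.20, 0.5)],
          "B": [(0.11, 0.9)],
          "C": [(0.05, 4.0)]}
ref = (1.0, 5.0)
total = hypervolume_2d([p for f in fronts.values() for p in f], ref)

for name in fronts:
    rest = [p for other, f in fronts.items() if other != name for p in f]
    loss = total - hypervolume_2d(rest, ref)
    print(f"{name}: contribution {loss:.3f}")   # near zero: candidate to drop
```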


ECDA | 2016

Big Data Classification: Aspects on Many Features and Many Observations

Claus Weihs; Daniel Horn; Bernd Bischl

In this paper we discuss the performance of classical classification methods on Big Data. We distinguish the cases of many features and many observations. For the case of many features we look at projection methods, distance-based methods, and feature selection. For the case of many observations we mainly consider subsampling. The examples in this paper show that standard classification methods should not be blindly applied to Big Data.
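
Both regimes have simple first remedies in practice. A sketch with scikit-learn (synthetic data, arbitrary sizes): projection onto principal components for the many-features case and random subsampling for the many-observations case:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Many features: project onto a few principal components before classifying.
X, y = make_classification(n_samples=1000, n_features=500, random_state=0)
proj = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
print(proj.fit(X, y).score(X, y))

# Many observations: fit on a random subsample instead of the full data.
X, y = make_classification(n_samples=100000, n_features=20, random_state=0)
idx = np.random.default_rng(0).choice(len(X), size=5000, replace=False)
print(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]).score(X, y))
```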


Parallel Problem Solving from Nature | 2018

A First Analysis of Kernels for Kriging-Based Optimization in Hierarchical Search Spaces

Martin Zaefferer; Daniel Horn

Many real-world optimization problems require significant resources for objective function evaluations. This is a challenge to evolutionary algorithms, as it limits the number of available evaluations. One solution is the use of surrogate models, which replace the expensive objective.
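
In a hierarchical search space some parameters are only active when others take certain values, so a Kriging kernel has to decide how inactive dimensions enter the distance. A toy sketch of one such choice, measuring distance only over dimensions active in both points plus a penalty for activity mismatch; this illustrates the design question and is not one of the specific kernels analyzed in the paper:

```python
import numpy as np

def hierarchical_kernel(x1, x2, active1, active2, theta=1.0):
    """Gaussian-type correlation for points in a hierarchical space:
    shared active dimensions contribute squared differences, and each
    dimension active in only one point adds a fixed mismatch penalty."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    a1, a2 = np.asarray(active1, bool), np.asarray(active2, bool)
    both = a1 & a2
    d2 = np.sum((x1[both] - x2[both]) ** 2)   # shared active dimensions
    d2 += np.sum(a1 != a2)                    # activity-mismatch penalty
    return np.exp(-theta * d2)

# Second dimension is active in the first point only.
print(hierarchical_kernel([0.2, 0.7], [0.3, 0.0], [True, True], [True, False]))
```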


International Conference on Evolutionary Multi-Criterion Optimization | 2017

First Investigations on Noisy Model-Based Multi-objective Optimization

Daniel Horn; Melanie Dagge; Xudong Sun; Bernd Bischl

In many real-world applications concerning multi-objective optimization, the true objective functions are not observable. Instead, only noisy observations are available. In recent years, the interest in the effect of such noise in evolutionary multi-objective optimization (EMO) has increased and many specialized algorithms have been proposed. However, evolutionary algorithms are not suitable if the evaluation of the objectives is expensive and only a small budget is available. One popular solution is to use model-based multi-objective optimization (MBMO) techniques. In this paper, we present a first investigation on noisy MBMO. For this purpose we collect several noise handling strategies from the field of EMO and adapt them for MBMO algorithms. We compare the performance of those strategies in two benchmark situations: firstly, we perform a purely artificial benchmark using homogeneous Gaussian noise; secondly, we choose a setting from the field of machine learning, where the structure of the underlying noise is unknown.
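
One of the simplest noise-handling strategies that can be borrowed from EMO is re-evaluation: average k repeated measurements, which shrinks the noise standard deviation by a factor of sqrt(k). A toy sketch under homogeneous Gaussian noise, using a hypothetical bi-objective test function:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_objectives(x, sigma=0.1):
    """Toy bi-objective problem observed under homogeneous Gaussian noise;
    the true values at x = 1.0 are (1.0, 1.0)."""
    return np.array([x ** 2, (x - 2) ** 2]) + rng.normal(0.0, sigma, size=2)

def reevaluate(x, k=10):
    """Re-evaluation strategy: average k repeated noisy evaluations to
    reduce the noise standard deviation by a factor of sqrt(k)."""
    return np.mean([noisy_objectives(x) for _ in range(k)], axis=0)

print(noisy_objectives(1.0))   # single noisy observation
print(reevaluate(1.0))         # averaged estimate, closer to (1.0, 1.0)
```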


Parallel Problem Solving from Nature | 2016

Multi-objective Selection of Algorithm Portfolios: Experimental Validation

Daniel Horn; Karin Schork; Tobias Wagner

The selection of algorithms to build portfolios represents a multi-objective problem. From a possibly large pool of algorithm candidates, a portfolio of limited size but good quality over a wide range of problems is desired. Possible applications can be found in the context of machine learning, where the accuracy and runtime of different learning techniques must be weighed. Each algorithm is represented by its Pareto front, which has been approximated in an a priori parameter tuning. Our approach for multi-objective selection of algorithm portfolios (MOSAP) is capable of trading off the number of algorithm candidates against the quality of the portfolio. The quality of the portfolio is defined as the distance to the joint Pareto front of all algorithm candidates. By means of a decision tree, the right algorithm can also be selected based on the characteristics of the problem.
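
The portfolio-quality notion, distance of the portfolio's front to the joint Pareto front of all candidates, can be sketched with an inverted-generational-distance-style measure (the exact metric used in the paper may differ):

```python
import numpy as np

def front_distance(portfolio_front, joint_front):
    """Average distance from each joint-front point to its nearest
    portfolio-front point; zero means the selection loses no quality."""
    P = np.asarray(portfolio_front, float)
    J = np.asarray(joint_front, float)
    d = np.linalg.norm(J[:, None, :] - P[None, :, :], axis=2)
    return d.min(axis=1).mean()

joint = [(0.05, 4.0), (0.10, 1.0), (0.20, 0.5)]
portfolio = [(0.10, 1.0), (0.20, 0.5)]      # one candidate algorithm dropped
print(front_distance(portfolio, joint))     # > 0: the drop costs quality
```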


Collaboration


Dive into Daniel Horn's collaborations.

Top Co-Authors

Claus Weihs, Technical University of Dortmund
Michel Lang, Technical University of Dortmund
Tobias Wagner, Technical University of Dortmund
Jakob Richter, Technical University of Dortmund
Karin Schork, Technical University of Dortmund
Dirk Biermann, Technical University of Dortmund
Klaus Friedrichs, Technical University of Dortmund