
Publication


Featured research published by Rikard König.


Congress on Evolutionary Computation | 2013

Evolved decision trees as conformal predictors

Ulf Johansson; Rikard König; Tuve Löfström; Henrik Boström

In conformal prediction, predictive models output sets of predictions with a bound on the error rate. In classification, this means that, in the long run, the probability of excluding the correct class stays below a predefined significance level. Since the error rate is guaranteed, the most important criterion for conformal predictors is efficiency. Efficient conformal predictors minimize the number of elements in the output prediction sets, thus producing more informative predictions. This paper presents one of the first comprehensive studies in which evolutionary algorithms are used to build conformal predictors. More specifically, decision trees evolved using genetic programming are evaluated as conformal predictors. In the experiments, the evolved trees are compared to decision trees induced using standard machine learning techniques on 33 publicly available benchmark data sets, with regard to predictive performance and efficiency. The results show that the evolved trees are generally more accurate, and the corresponding conformal predictors more efficient, than their induced counterparts. One important result is that the probability estimates of decision trees used as conformal predictors should be smoothed, here using the Laplace correction. Finally, using the more discriminating Brier score instead of accuracy as the optimization criterion produced the most efficient conformal predictors.
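The abstract compresses the conformal prediction machinery into a few sentences. A minimal sketch of the two ingredients it highlights, Laplace-smoothed probability estimates and an inductive conformal prediction set, might look as follows; the function names and calibration setup are illustrative assumptions, not the authors' implementation:

```python
def laplace_probs(counts):
    """Laplace-corrected class probabilities from leaf class counts.

    Smoothing avoids the extreme 0/1 estimates that raw leaf
    frequencies produce, which the paper found important for
    conformal prediction.
    """
    total = sum(counts.values())
    k = len(counts)
    return {c: (n + 1) / (total + k) for c, n in counts.items()}

def prediction_set(cal_scores, class_probs, significance):
    """Inductive conformal prediction set for one test instance.

    cal_scores: nonconformity scores (1 - prob of the true class)
    collected on a held-out calibration set.
    class_probs: smoothed probability estimate per class label.
    Returns every label whose p-value exceeds the significance level.
    """
    region = []
    n = len(cal_scores)
    for label, p in class_probs.items():
        score = 1.0 - p  # nonconformity of assuming this label
        # p-value: fraction of calibration scores at least as extreme
        p_value = (sum(s >= score for s in cal_scores) + 1) / (n + 1)
        if p_value > significance:
            region.append(label)
    return region
```

A more efficient conformal predictor is simply one whose `prediction_set` tends to contain fewer labels at the same significance level.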


International Conference on Information Fusion | 2005

Automatically balancing accuracy and comprehensibility in predictive modeling

Ulf Johansson; Rikard König; Lars Niklasson

One specific problem, when performing predictive modeling, is the tradeoff between accuracy and comprehensibility. When comprehensible models are required, this normally rules out high-accuracy techniques like neural networks and committee machines. Therefore, an automated choice of a standard technique, known to generally produce sufficiently accurate and comprehensible models, would be of great value. In this paper, it is argued that this requirement is met by an ensemble of classifiers, followed by rule extraction. The proposed technique is demonstrated, using an ensemble of common classifiers and our rule extraction algorithm G-REX, on 17 publicly available data sets. The results presented demonstrate that the suggested technique performs very well. More specifically, the ensemble clearly outperforms the individual classifiers regarding accuracy, while the extracted models have accuracy similar to the individual classifiers. The extracted models are, however, significantly more compact than corresponding models created directly from the data set using the standard tool CART; thus providing higher comprehensibility.


Congress on Evolutionary Computation | 2007

Genetic programming - a tool for flexible rule extraction

Rikard König; Ulf Johansson; Lars Niklasson

Although data mining is performed to support decision making, many of the most powerful techniques, like neural networks and ensembles, produce opaque models. This lack of interpretability is an obvious disadvantage, since decision makers normally require some sort of explanation before taking action. To achieve comprehensibility, accuracy is often sacrificed by the use of simpler, transparent models, such as decision trees. Another alternative is rule extraction; i.e. to transform the opaque model into a comprehensible model, keeping acceptable accuracy. We have previously suggested a rule extraction algorithm named G-REX, which is based on genetic programming. One key property of G-REX, due to the use of genetic programming, is the possibility to use different representation languages. In this study we apply G-REX to estimation tasks. More specifically, three representation languages are evaluated using eight publicly available data sets. The quality of the extracted rules is compared to two standard techniques producing comprehensible models; multiple linear regression and the decision tree algorithm C&RT. The results show that G-REX outperforms the standard techniques, but that the choice of representation language is important.


International Joint Conference on Neural Networks | 2006

Building Neural Network Ensembles using Genetic Programming

Ulf Johansson; Tuve Löfström; Rikard König; Lars Niklasson

In this paper we present and evaluate a novel algorithm for ensemble creation. The main idea of the algorithm is to first independently train a fixed number of neural networks (here ten) and then use genetic programming to combine these networks into an ensemble. The use of genetic programming makes it possible to not only consider ensembles of different sizes, but also to use ensembles as intermediate building blocks. The final result is therefore more correctly described as an ensemble of neural network ensembles. The experiments show that the proposed method, when evaluated on 22 publicly available data sets, obtains very high accuracy, clearly outperforming the other methods evaluated. In this study several micro techniques are used, and we believe that they all contribute to the increased performance. One such micro technique, aimed at reducing overtraining, is the training method, called tombola training, used during genetic evolution. When using tombola training, training data is regularly resampled into new parts, called training groups. Each ensemble is then evaluated on every training group and the actual fitness is determined solely from the result on the hardest part.
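The tombola training described above is easy to state in code. A rough sketch, where the grouping scheme and names are assumptions rather than details from the paper:

```python
import random

def tombola_fitness(ensemble_error, data, n_groups, rng):
    """Tombola training fitness for one ensemble.

    Resample the training data into `n_groups` disjoint training
    groups, score the ensemble on each, and determine fitness solely
    from the hardest group, i.e. the one with the largest error.
    """
    shuffled = list(data)
    rng.shuffle(shuffled)  # groups are regularly resampled between calls
    size = len(shuffled) // n_groups
    groups = [shuffled[i * size:(i + 1) * size] for i in range(n_groups)]
    return max(ensemble_error(group) for group in groups)
```

Because fitness tracks the worst group, and the groups are reshuffled during evolution, an ensemble cannot score well by overfitting any single fixed partition of the training data.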


Data Mining | 2010

Genetically Evolved kNN Ensembles

Ulf Johansson; Rikard König; Lars Niklasson

Both theory and a wealth of empirical studies have established that ensembles are more accurate than single predictive models. For the ensemble approach to work, base classifiers must not only be accurate but also diverse, i.e., they should commit their errors on different instances. Instance-based learners are, however, very robust with respect to variations of a data set, so standard resampling methods will normally produce only limited diversity. Because of this, instance-based learners are rarely used as base classifiers in ensembles. In this chapter, we introduce a method where genetic programming is used to generate kNN base classifiers with optimized k-values and feature weights. Due to the inherent inconsistency in genetic programming (i.e., different runs using identical data and parameters will still produce different solutions) a group of independently evolved base classifiers tend to be not only accurate but also diverse. In the experimentation, using 30 data sets from the UCI repository, two slightly different versions of kNN ensembles are shown to significantly outperform both the corresponding base classifiers and standard kNN with optimized k-values, with respect to accuracy and AUC.
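The combination described above, base kNN classifiers that differ in their k-values and feature weights and are then combined by voting, can be sketched as follows. The member parameters are fixed here for illustration; in the chapter they are the part that genetic programming evolves:

```python
import math
from collections import Counter

def weighted_knn(train, query, k, weights):
    """Classify `query` by majority vote among the k nearest training
    points under feature-weighted Euclidean distance."""
    def dist(point):
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, point, query)))
    nearest = sorted(train, key=lambda item: dist(item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def knn_ensemble(train, query, members):
    """Majority vote over base kNN classifiers, each with its own
    k-value and feature-weight vector (here hand-picked; evolved in
    the chapter's method)."""
    votes = Counter(weighted_knn(train, query, k, w) for k, w in members)
    return votes.most_common(1)[0][0]
```

The diversity the chapter relies on comes from the members disagreeing: different k-values and weight vectors change which neighbours each base classifier consults.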


Genetic and Evolutionary Computation Conference | 2010

Genetic rule extraction optimizing Brier score

Ulf Johansson; Rikard König; Lars Niklasson

Most highly accurate predictive modeling techniques produce opaque models. When comprehensible models are required, rule extraction is sometimes used to generate a transparent model based on the opaque one. Naturally, the extracted model should be as similar as possible to the opaque model. This criterion, called fidelity, is therefore a key part of the optimization function in most rule extraction algorithms. To the best of our knowledge, all existing rule extraction algorithms targeting fidelity use 0/1 fidelity, i.e., they maximize the number of identical classifications. In this paper, we suggest and evaluate a rule extraction algorithm utilizing a more informed fidelity criterion. More specifically, the novel algorithm, which is based on genetic programming, minimizes the difference in probability estimates between the extracted and the opaque models by using the generalized Brier score as fitness function. Experimental results from 26 UCI data sets show that the suggested algorithm obtained considerably higher accuracy and significantly better AUC than both the exact same rule extraction algorithm maximizing 0/1 fidelity and the standard tree inducer J48. Somewhat surprisingly, rule extraction using the more informed fidelity metric normally resulted in less complex models, showing that the improved predictive performance was not achieved at the expense of comprehensibility.
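The two fidelity criteria contrasted above can be made concrete. A small sketch (names illustrative) comparing 0/1 fidelity with a Brier-score-based criterion over per-instance class probability vectors:

```python
def brier_fidelity(opaque_probs, extracted_probs):
    """Generalized Brier score between the probability vectors of the
    opaque and extracted models, averaged over instances.
    Lower is better; 0 means identical probability estimates."""
    total = 0.0
    for p_opaque, p_extr in zip(opaque_probs, extracted_probs):
        total += sum((po - pe) ** 2 for po, pe in zip(p_opaque, p_extr))
    return total / len(opaque_probs)

def zero_one_fidelity(opaque_probs, extracted_probs):
    """Fraction of instances where both models predict the same class
    (the traditional fidelity criterion). Higher is better."""
    argmax = lambda p: max(range(len(p)), key=p.__getitem__)
    agree = sum(argmax(po) == argmax(pe)
                for po, pe in zip(opaque_probs, extracted_probs))
    return agree / len(opaque_probs)
```

The difference matters when two candidate models classify identically but with different confidence: 0/1 fidelity cannot separate them, while the Brier criterion rewards the one whose probability estimates track the opaque model.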


Congress on Evolutionary Computation | 2010

Improving GP classification performance by injection of decision trees

Rikard König; Ulf Johansson; Tuve Löfström; Lars Niklasson

This paper presents a novel hybrid method combining genetic programming and decision tree learning. The method starts by estimating a benchmark level of reasonable accuracy, based on decision tree performance on bootstrap samples of the training set. Next, a normal GP evolution is started with the aim of producing an accurate GP. At even intervals, the best GP in the population is evaluated against the accuracy benchmark. If the GP has higher accuracy than the benchmark, the evolution continues normally until the maximum number of generations is reached. If the accuracy is lower than the benchmark, two things happen. First, the fitness function is modified to allow larger GPs, able to represent more complex models. Second, a decision tree with increased size, trained on a bootstrap of the training data, is injected into the population. The experiments show that the hybrid solution of injecting decision trees into a GP population gives synergistic effects, producing results that are better than using either technique separately. The results, from 18 UCI data sets, show that the proposed method clearly outperforms normal GP and is significantly better than the standard decision tree algorithm.


International Conference on Artificial Intelligence and Soft Computing | 2006

Genetically evolved trees representing ensembles

Ulf Johansson; Tuve Löfström; Rikard König; Lars Niklasson

We have recently proposed a novel algorithm for ensemble creation called GEMS (Genetic Ensemble Member Selection). GEMS first trains a fixed number of neural networks (here twenty) and then uses genetic programming to combine these networks into an ensemble. The use of genetic programming makes it possible for GEMS to not only consider ensembles of different sizes, but also to use ensembles as intermediate building blocks. In this paper, which is the first extensive study of GEMS, the representation language is extended to include tests partitioning the data, further increasing flexibility. In addition, several micro techniques are applied to reduce overfitting, which appears to be the main problem for this powerful algorithm. The experiments show that GEMS, when evaluated on 15 publicly available data sets, obtains very high accuracy, clearly outperforming both straightforward ensemble designs and standard decision tree algorithms.


International Symposium on Neural Networks | 2003

Neural networks and rule extraction for prediction and explanation in the marketing domain

Ulf Johansson; Cecilia Sönströd; Rikard König; Lars Niklasson

This paper contains a case study where neural networks are used for prediction and explanation in the marketing domain. Initially, neural networks are used for regression and classification to predict the impact of advertising from money invested in different media categories. Rule extraction is then performed on the trained networks, using the G-REX method, which is based on genetic programming. Results show that both the neural networks and the extracted rules outperform the standard tool See5. G-REX combines high performance with short rules, ensuring that they provide explanation rather than obfuscation.


European Conference on Applications of Evolutionary Computation | 2013

Evolving hierarchical temporal memory-based trading models

Patrick Gabrielsson; Rikard König; Ulf Johansson

We explore the possibility of using the genetic algorithm to optimize trading models based on the Hierarchical Temporal Memory (HTM) machine learning technology. Technical indicators, derived from intraday tick data for the E-mini S&P 500 futures market (ES), were used as feature vectors to the HTM models. All models were configured as binary classifiers, using a simple buy-and-hold trading strategy, and followed a supervised training scheme. The data set was partitioned into multiple folds to enable a modified cross validation scheme. Artificial Neural Networks (ANNs) were used to benchmark HTM performance. The results show that the genetic algorithm succeeded in finding predictive models with good performance and generalization ability. The HTM models outperformed the neural network models on the chosen data set and both technologies yielded profitable results with above average accuracy.

Collaboration


Dive into Rikard König's collaborations.

Top Co-Authors

Ulf Johansson, Information Technology University
Cecilia Sönströd, Information Technology University
Peter Brattberg, Information Technology University
Patrick Gabrielsson, Information Technology University