Publication


Featured research published by Sang-Hyeun Park.


European Conference on Machine Learning | 2007

Efficient Pairwise Classification

Sang-Hyeun Park; Johannes Fürnkranz

Pairwise classification is a class binarization procedure that converts a multi-class problem into a series of two-class problems, one problem for each pair of classes. While it can be shown that for training, this procedure is more efficient than the more commonly used one-against-all approach, it still has to evaluate a quadratic number of classifiers when computing the predicted class for a given example. In this paper, we propose a method that allows a faster computation of the predicted class when weighted or unweighted voting are used for combining the predictions of the individual classifiers. While its worst-case complexity is still quadratic in the number of classes, we show that even in the case of completely random base classifiers, our method still outperforms the conventional pairwise classifier. For the more practical case of well-trained base classifiers, its asymptotic computational complexity seems to be almost linear.
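The class-binarization-plus-voting scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the nearest-centroid base learner and the function names are stand-ins for whatever binary learner is actually used.

```python
from itertools import combinations

def train_pairwise(X, y):
    """One binary problem per pair of classes (class binarization).
    Each 'classifier' here is a toy nearest-centroid rule; any binary
    learner could be plugged in instead."""
    classes = sorted(set(y))
    models = {}
    for a, b in combinations(classes, 2):
        pts_a = [x for x, c in zip(X, y) if c == a]
        pts_b = [x for x, c in zip(X, y) if c == b]
        cen_a = [sum(col) / len(pts_a) for col in zip(*pts_a)]
        cen_b = [sum(col) / len(pts_b) for col in zip(*pts_b)]
        models[(a, b)] = (cen_a, cen_b)
    return models

def predict_voting(models, x):
    """Query all k(k-1)/2 classifiers and return the class with the
    most (unweighted) votes -- the quadratic cost the paper reduces."""
    votes = {}
    for (a, b), (cen_a, cen_b) in models.items():
        da = sum((xi - ci) ** 2 for xi, ci in zip(x, cen_a))
        db = sum((xi - ci) ** 2 for xi, ci in zip(x, cen_b))
        winner = a if da <= db else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```

With k classes, `predict_voting` evaluates all k(k-1)/2 models; the paper's contribution is avoiding most of these evaluations while returning the same winner.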


Machine Learning | 2012

Preference-based reinforcement learning: a formal framework and a policy iteration algorithm

Johannes Fürnkranz; Eyke Hüllermeier; Weiwei Cheng; Sang-Hyeun Park

This paper takes a first step toward the integration of two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a preference-based approach to reinforcement learning is the observation that in many real-world domains, numerical feedback signals are not readily available, or are defined arbitrarily in order to satisfy the needs of conventional RL algorithms. Instead, we propose an alternative framework for reinforcement learning, in which qualitative reward signals can be directly used by the learner. The framework may be viewed as a generalization of the conventional RL framework in which only a partial order between policies is required instead of the total order induced by their respective expected long-term reward. Therefore, building on novel methods for preference learning, our general goal is to equip the RL agent with qualitative policy models, such as ranking functions that allow for sorting its available actions from most to least promising, as well as algorithms for learning such models from qualitative feedback. As a proof of concept, we realize a first simple instantiation of this framework that defines preferences based on utilities observed for trajectories. To that end, we build on an existing method for approximate policy iteration based on roll-outs. While this approach is based on the use of classification methods for generalization and policy learning, we make use of a specific type of preference learning method called label ranking. Advantages of preference-based approximate policy iteration are illustrated by means of two case studies.
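The two ingredients of the framework, qualitative trajectory preferences and a ranking of actions learned from them, can be illustrated with a toy sketch. The helper names and the pairwise-win ranking are illustrative assumptions, not the paper's label-ranking method:

```python
def trajectory_preference(traj_a, traj_b, utility):
    """Qualitative feedback: return the preferred trajectory (None on
    ties). Only the *order* of utilities matters, never the magnitude,
    which is the point of the preference-based framework."""
    ua, ub = utility(traj_a), utility(traj_b)
    if ua == ub:
        return None
    return traj_a if ua > ub else traj_b

def rank_actions(preferences):
    """Toy ranking model: sort actions by pairwise wins collected from
    (winner, loser) preference feedback, most to least promising."""
    wins = {}
    for winner, loser in preferences:
        wins[winner] = wins.get(winner, 0) + 1
        wins.setdefault(loser, 0)
    return sorted(wins, key=wins.get, reverse=True)
```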


Data Mining and Knowledge Discovery | 2012

Efficient prediction algorithms for binary decomposition techniques

Sang-Hyeun Park; Johannes Fürnkranz

Binary decomposition methods transform multiclass learning problems into a series of two-class learning problems that can be solved with simpler learning algorithms. As the number of such binary learning problems often grows super-linearly with the number of classes, we need efficient methods for computing the predictions. In this article, we discuss an efficient algorithm that queries only a dynamically determined subset of the trained classifiers, but still predicts the same classes that would have been predicted if all classifiers had been queried. The algorithm is first derived for the simple case of pairwise classification, and then generalized to arbitrary pairwise decompositions of the learning problem in the form of ternary error-correcting output codes under a variety of different code designs and decoding strategies.
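For the pairwise special case, the idea of querying only a dynamically determined subset of classifiers can be sketched as follows: repeatedly evaluate a classifier involving the class with the fewest lost votes, and stop once that class has no unevaluated pairings left, so no rival can overtake it. This is an illustrative sketch in the spirit of the algorithm, with invented function names, and ties are broken arbitrarily:

```python
def adaptive_predict(classes, evaluate):
    """Predict the unweighted-voting winner while querying only a
    subset of the pairwise classifiers. `evaluate(a, b)` returns the
    winning class of the (a, b) base classifier."""
    losses = {c: 0 for c in classes}
    pending = {c: set(classes) - {c} for c in classes}
    calls = 0
    while True:
        # class with the fewest lost votes so far
        best = min(classes, key=lambda c: losses[c])
        if not pending[best]:
            # best cannot lose further votes, so it cannot be overtaken
            return best, calls
        rival = min(sorted(pending[best]), key=lambda c: losses[c])
        winner = evaluate(best, rival)
        calls += 1
        loser = rival if winner == best else best
        losses[loser] += 1
        pending[best].discard(rival)
        pending[rival].discard(best)
```

In the worst case all k(k-1)/2 classifiers are still queried, matching the quadratic bound mentioned above; with a clear winner, far fewer evaluations suffice.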


European Conference on Machine Learning | 2009

Efficient Decoding of Ternary Error-Correcting Output Codes for Multiclass Classification

Sang-Hyeun Park; Johannes Fürnkranz

We present an adaptive decoding algorithm for ternary ECOC matrices which reduces the number of classifier evaluations needed for multiclass classification. The resulting predictions are guaranteed to be equivalent to those of the original decoding strategy, except for ambiguous final predictions. The technique works for Hamming decoding and several commonly used alternative decoding strategies. We show its effectiveness in an extensive empirical evaluation considering various code design types: in nearly all cases, a considerable reduction is possible. We also show that the performance gain depends on the sparsity and the dimension of the ECOC coding matrix.
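For the Hamming-decoding case, the adaptive idea can be sketched as follows: evaluate the binary classifiers one at a time, and stop as soon as the current best codeword's distance lead over the runner-up exceeds the number of unevaluated bits. This is a simplified illustration with assumed names, not the paper's exact algorithm; ternary 0-entries are treated as "this classifier casts no vote for that class".

```python
def adaptive_hamming_decode(code_matrix, classify_bit):
    """code_matrix[i][j] in {+1, -1, 0}; classify_bit(j) returns the
    +1/-1 prediction of binary classifier j. Returns (class, bits used)."""
    n_classes = len(code_matrix)
    n_bits = len(code_matrix[0])
    dist = [0] * n_classes  # running Hamming distance per codeword
    for j in range(n_bits):
        pred = classify_bit(j)
        for i, row in enumerate(code_matrix):
            if row[i * 0 + j] != 0 and row[j] != pred:
                dist[i] += 1
        remaining = n_bits - (j + 1)
        ranked = sorted(range(n_classes), key=dist.__getitem__)
        # the best codeword can gain at most `remaining` more distance
        if dist[ranked[1]] - dist[ranked[0]] > remaining:
            return ranked[0], j + 1
    return min(range(n_classes), key=dist.__getitem__), n_bits
```

Sparser matrices (more 0-entries) change the distances more slowly, which is one way the sparsity dependence noted above shows up.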


Machine Learning | 2014

Efficient implementation of class-based decomposition schemes for Naïve Bayes

Sang-Hyeun Park; Johannes Fürnkranz

Previous studies have shown that the classification accuracy of a Naïve Bayes classifier in the domain of text classification can often be improved using binary decompositions such as error-correcting output codes (ECOC). The key contribution of this short note is the realization that ECOC and, in fact, all class-based decomposition schemes can be efficiently implemented in a Naïve Bayes classifier, so that—because of the additive nature of the classifier—all binary classifiers can be trained in a single pass through the data. In contrast to the straightforward implementation, which has a complexity of O(n⋅t⋅g), the proposed approach improves the complexity to O((n+t)⋅g). Large-scale learning of ensemble approaches with Naïve Bayes can benefit from this approach, as the experimental results shown in this paper demonstrate.
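The additivity argument can be made concrete with a counting sketch: one pass over the data collects per-class token counts, and the statistics of any binary classifier in the decomposition are then just sums of those counts, selected by its code column. The function names are illustrative, and only the count-collection step is shown, not a full Naïve Bayes classifier:

```python
from collections import Counter, defaultdict

def train_class_counts(examples):
    """Single pass over (label, tokens) pairs: per-class token counts.
    Because Naive Bayes statistics are additive, these counts suffice
    to derive every binary classifier afterwards."""
    counts = defaultdict(Counter)
    for label, tokens in examples:
        counts[label].update(tokens)
    return counts

def binary_classifier_counts(counts, column):
    """Derive one ECOC binary classifier's token counts by summing the
    per-class counts according to its code column (+1 / -1 / 0)."""
    pos, neg = Counter(), Counter()
    for label, sign in column.items():
        if sign == +1:
            pos.update(counts[label])
        elif sign == -1:
            neg.update(counts[label])
    return pos, neg
```

Since the data is touched once and each of the t binary classifiers is then assembled from the cached counts, the n⋅t factor in the naive implementation becomes n+t.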


Discovery Science | 2010

Exploiting code redundancies in ECOC

Sang-Hyeun Park; Lorenz Weizsäcker; Johannes Fürnkranz

We study an approach for speeding up the training of error-correcting output codes (ECOC) classifiers. The key idea is to avoid unnecessary computations by exploiting the overlap of the different training sets in the ECOC ensemble. Instead of re-training each classifier from scratch, classifiers that have been trained for one task can be adapted to related tasks in the ensemble. The crucial issue is the identification of a schedule for training the classifiers which maximizes the exploitation of the overlap. For solving this problem, we construct a classifier graph in which the nodes correspond to the classifiers, and the edges represent the training complexity for moving from one classifier to the next, in terms of the number of added training examples. The solution of the Steiner Tree problem is an arborescence in this graph which describes the learning scheme with the minimal total training complexity. We experimentally evaluate the algorithm with Hoeffding trees, as an example of incremental learners where the classifier adaptation is trivial, and with SVMs, where we employ an adaptation strategy based on adapted caching and weight reuse, which guarantees that the learned model is the same as the one obtained by batch learning.
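The classifier-graph idea can be illustrated with a small sketch. Note the substitution: the paper solves a Steiner Tree problem to find the optimal arborescence, whereas the code below uses a simple greedy heuristic (always adapt the cheapest reachable classifier next) purely to show how edge costs are defined as numbers of added training examples. All names are invented for illustration.

```python
def added_examples(train_from, train_to):
    """Edge cost in the classifier graph: how many training examples
    must be added to adapt a classifier from one task to another."""
    return len(set(train_to) - set(train_from))

def greedy_training_schedule(tasks):
    """Greedy stand-in for the Steiner-tree formulation: train the
    first task from scratch, then always pick the (trained, untrained)
    pair with the fewest added examples. `tasks` maps a task name to
    its list of training-example ids."""
    names = sorted(tasks)
    done = {names[0]}
    schedule = [(None, names[0], len(set(tasks[names[0]])))]
    while len(done) < len(names):
        parent, child, cost = min(
            ((p, c, added_examples(tasks[p], tasks[c]))
             for p in sorted(done) for c in sorted(set(names) - done)),
            key=lambda e: e[2])
        schedule.append((parent, child, cost))
        done.add(child)
    return schedule
```

On overlapping training sets the greedy schedule already adds far fewer examples than retraining every classifier from scratch; the optimal arborescence can only do better.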


KI'09: Proceedings of the 32nd Annual German Conference on Advances in Artificial Intelligence | 2009

An exploitative Monte-Carlo poker agent

Immanuel Schweizer; Kamill Panitzek; Sang-Hyeun Park; Johannes Fürnkranz

We describe the poker agent AKI-REALBOT, which participated in the 6-player Limit Competition of the third Annual AAAI Computer Poker Challenge in 2008. It finished in second place, its performance being mostly due to its superior ability to exploit weaker bots. This paper describes the architecture of the program and the Monte-Carlo decision tree-based decision engine that was used to make the bot's decisions. It focuses on the modifications that made the bot successful in exploiting weaker bots.


Discovery Science | 2012

Error-Correcting Output Codes as a Transformation from Multi-Class to Multi-Label Prediction

Johannes Fürnkranz; Sang-Hyeun Park

In this paper, we reinterpret error-correcting output codes (ECOCs) as a framework for converting multi-class classification problems into multi-label prediction problems. Different well-known multi-label learning approaches can be mapped upon particular ways of dealing with the original multi-class problem. For example, the label powerset approach obviously constitutes the inverse transformation from multi-label back to multi-class, whereas binary relevance learning may be viewed as the conventional way of dealing with ECOCs, in which each classifier is learned independently of the others. Consequently, we evaluate whether alternative choices for solving the multi-label problem may result in improved performance. This question is interesting because it is not clear whether approaches that do not treat the bits of the code words independently have sufficient error-correcting properties. Our results indicate that a slight but consistent advantage can be obtained with the use of multi-label methods, in particular when longer codes are employed.
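The transformation itself is simple to sketch: each class is replaced by the set of code-word positions set to +1, so every bit position becomes one label, and the label powerset view maps a predicted label set back to the nearest code word. This is an illustrative sketch with assumed names, not the paper's experimental setup:

```python
def to_multilabel(examples, code_matrix):
    """Reinterpret (x, class) pairs as multi-label examples: the label
    set of a class is the set of +1 positions in its code word."""
    return [(x, {j for j, bit in enumerate(code_matrix[y]) if bit == 1})
            for x, y in examples]

def label_powerset_decode(labelset, code_matrix):
    """Label powerset as the inverse transformation: map a predicted
    label set back to the class with the nearest code word (symmetric
    difference plays the role of Hamming distance)."""
    def dist(y):
        word = {j for j, bit in enumerate(code_matrix[y]) if bit == 1}
        return len(word ^ labelset)
    return min(range(len(code_matrix)), key=dist)
```

Binary relevance then corresponds to predicting each bit independently, which is exactly the conventional ECOC setting the abstract contrasts against.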


Neurocomputing | 2010

Efficient voting prediction for pairwise multilabel classification

Eneldo Loza Mencía; Sang-Hyeun Park; Johannes Fürnkranz


European Conference on Machine Learning | 2011

Preference-based policy iteration: leveraging preference learning for reinforcement learning

Weiwei Cheng; Johannes Fürnkranz; Eyke Hüllermeier; Sang-Hyeun Park

Collaboration


Sang-Hyeun Park's top co-authors and their affiliations:

Johannes Fürnkranz, Technische Universität Darmstadt
Eneldo Loza Mencía, Technische Universität Darmstadt
Immanuel Schweizer, Technische Universität Darmstadt
Kamill Panitzek, Technische Universität Darmstadt
Lorenz Weizsäcker, Technische Universität Darmstadt
Grigorios Tsoumakas, Aristotle University of Thessaloniki
Ioannis Katakis, National and Kapodistrian University of Athens