Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Daniela Hofmann is active.

Publication


Featured research published by Daniela Hofmann.


Neurocomputing | 2014

Learning interpretable kernelized prototype-based models

Daniela Hofmann; Frank-Michael Schleif; Benjamin Paaßen; Barbara Hammer

Since they represent a model in terms of a few typical representatives, prototype-based learning schemes such as learning vector quantization (LVQ) constitute directly interpretable machine learning techniques. Recently, several LVQ schemes have been extended towards kernelized or dissimilarity-based versions which can be applied if data are represented by pairwise similarities or dissimilarities only. This opens the way towards their application in domains where data are typically not represented in vectorial form. Although kernel LVQ still represents models by typical prototypes, interpretability is usually lost this way: since no vector space model is available, prototypes are represented only indirectly, in terms of combinations of data points. In this contribution, we extend a recent kernel LVQ scheme by sparse approximations to overcome this problem: instead of the full coefficient vectors, a few exemplars which represent the prototypes can be inspected directly by practitioners, in the same way as the data themselves. For this purpose, we investigate different possibilities to approximate a prototype by a sparse counterpart during or after training, relying on different heuristics or approximation algorithms, in particular sparsity constraints while training, geometric approaches, orthogonal matching pursuit, and core techniques for the minimum enclosing ball problem. We discuss the behavior of these methods on several benchmark problems with respect to quality, sparsity, and interpretability, and we propose different measures to quantitatively evaluate the performance of the approaches.
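One of the geometric heuristics mentioned above can be sketched in a few lines. The sketch below uses illustrative names and is not the authors' code: it approximates a kernel-space prototype, given as a coefficient vector over the training exemplars, by the few exemplars closest to it in the kernel-induced metric, and it assumes the coefficients form a convex combination.

```python
import numpy as np

def sparse_prototype(K, alpha, k=3):
    """Approximate a kernel-space prototype w = sum_m alpha[m] * phi(x_m)
    by its k nearest training exemplars (a simple geometric heuristic).

    K     : (n, n) Gram matrix of the training data
    alpha : (n,) coefficient vector of the prototype
    k     : number of exemplars kept in the sparse approximation
    """
    # squared kernel-induced distance of every exemplar to the prototype:
    # ||phi(x_i) - w||^2 = K_ii - 2 (K alpha)_i + alpha^T K alpha
    d2 = np.diag(K) - 2.0 * K @ alpha + alpha @ K @ alpha
    nearest = np.argsort(d2)[:k]

    # keep only the selected exemplars; renormalize under the assumption
    # that the original coefficients form a convex combination
    sparse_alpha = np.zeros_like(alpha)
    sparse_alpha[nearest] = alpha[nearest]
    if sparse_alpha.sum() != 0:
        sparse_alpha /= sparse_alpha.sum()
    return sparse_alpha, nearest
```

A practitioner can then inspect the few exemplars indexed by `nearest` directly, instead of the full coefficient vector.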


Intelligent Data Analysis | 2012

Discriminative dimensionality reduction mappings

Andrej Gisbrecht; Daniela Hofmann; Barbara Hammer

Discriminative dimensionality reduction aims at a low-dimensional, usually nonlinear, representation of given data such that the information specified by auxiliary discriminative labeling is preserved as accurately as possible. This paper centers on two open problems connected to this question: (i) how to evaluate discriminative dimensionality reduction quantitatively, and (ii) how to arrive at explicit nonlinear discriminative dimensionality reduction mappings. Based on recent work for the unsupervised case, we propose an evaluation measure and an explicit discriminative dimensionality reduction mapping using the Fisher information.
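As a hedged illustration of question (i): a common, simple proxy for how well an embedding respects the labeling is the cross-validated k-NN accuracy measured directly in the low-dimensional space. The paper's actual measure is a label-aware neighborhood-based criterion; the sketch below, with assumed names, is only a coarse stand-in.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def embedding_knn_score(Z, labels, k=5):
    """Cross-validated k-NN accuracy in the embedding Z (n_samples x dim),
    a coarse proxy for a supervised quality measure of discriminative
    dimensionality reduction."""
    clf = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(clf, Z, labels, cv=5).mean()
```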


Artificial Neural Networks in Pattern Recognition | 2012

Kernel robust soft learning vector quantization

Daniela Hofmann; Barbara Hammer

Prototype-based classification schemes offer very intuitive and flexible classifiers with the benefit of easy interpretability of the results and scalability of the model complexity. Recent prototype-based models such as robust soft learning vector quantization (RSLVQ) have the benefit of a solid mathematical foundation of the learning rule and decision boundaries in terms of probabilistic models and corresponding likelihood optimization. In its original form, RSLVQ can be used for standard Euclidean vectors only. In this contribution, we extend RSLVQ towards a kernelized version which can be used for any positive semidefinite data matrix. We demonstrate the superior performance of the technique, kernel RSLVQ, on a variety of benchmarks where results competitive with or even superior to state-of-the-art support vector machines are obtained.
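The key technical step behind such a kernelization is that prototypes live in the feature space as combinations w_j = Σ_m Γ_jm φ(x_m), so all distances required by the learning rule can be written purely in terms of the Gram matrix. The sketch below illustrates this identity with assumed names; it is not the authors' implementation.

```python
import numpy as np

def kernel_distances(K, Gamma):
    """Squared feature-space distances between all data points and all
    prototypes, computed from the Gram matrix only.

    K     : (n, n) Gram matrix, K[i, m] = k(x_i, x_m)
    Gamma : (p, n) coefficients, prototype j is sum_m Gamma[j, m] * phi(x_m)
    Returns D2 of shape (n, p) with D2[i, j] = ||phi(x_i) - w_j||^2.
    """
    # ||phi(x_i)||^2 = K_ii, <phi(x_i), w_j> = (K Gamma^T)_ij,
    # ||w_j||^2 = (Gamma K Gamma^T)_jj
    data_norms = np.diag(K)[:, None]                          # (n, 1)
    cross = K @ Gamma.T                                       # (n, p)
    proto_norms = np.einsum('jm,mn,jn->j', Gamma, K, Gamma)   # (p,)
    return data_norms - 2.0 * cross + proto_norms[None, :]
```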


Neurocomputing | 2015

Efficient approximations of robust soft learning vector quantization for non-vectorial data

Daniela Hofmann; Andrej Gisbrecht; Barbara Hammer

Due to its intuitive learning algorithms and classification behavior, learning vector quantization (LVQ) enjoys wide popularity in diverse application domains. In recent years, the classical heuristic schemes have been accompanied by variants which can be motivated by a statistical framework, such as robust soft LVQ (RSLVQ). In their original form, LVQ and RSLVQ can be applied to vectorial data only, making them unsuitable for complex data sets described in terms of pairwise relations only. In this contribution, we address kernel RSLVQ, which extends its applicability to data described by a general Gram matrix. While leading to state-of-the-art results, this extension has the drawback that models are no longer sparse, and quadratic training complexity is encountered due to the dependency of the method on the full Gram matrix. We investigate the performance of a speed-up of training by means of low-rank approximations of the Gram matrix, and we investigate how sparse models can be enforced in this context. It turns out that an efficient Nyström approximation can be used if data are intrinsically low dimensional, a property which can be efficiently checked by sampling the variance of the approximation prior to training. Further, all models enable sparse approximations of comparable quality to the full models using simple geometric approximation schemes only. We demonstrate the behavior of these approximations in a couple of benchmarks.
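A minimal sketch of the two ingredients mentioned above, under assumed names and without any claim to match the authors' implementation: a Nyström approximation of the Gram matrix built from m landmark columns, and a cheap sampled error estimate that can be run before committing to training.

```python
import numpy as np

def nystroem_approximation(K, m, seed=None):
    """Rank-m Nystroem approximation of the Gram matrix K: sample m landmark
    columns C and the landmark block W, then K is approximated by C W^+ C^T."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    C = K[:, idx]                        # (n, m) sampled columns
    W = K[np.ix_(idx, idx)]              # (m, m) landmark block
    return C @ np.linalg.pinv(W) @ C.T

def sampled_error(K, K_approx, n_samples=1000, seed=None):
    """Mean squared error on randomly sampled Gram matrix entries; a cheap
    proxy for whether the data are intrinsically low dimensional enough
    for the approximation to be trusted."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    i = rng.integers(0, n, n_samples)
    j = rng.integers(0, n, n_samples)
    return np.mean((K[i, j] - K_approx[i, j]) ** 2)
```

In practice one would reconstruct only the sampled entries rather than the full approximate matrix, but the idea of checking the approximation quality before training is the same.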


Computational Intelligence and Data Mining | 2014

Valid interpretation of feature relevance for linear data mappings

Benoît Frénay; Daniela Hofmann; Alexander Schulz; Michael Biehl; Barbara Hammer

Linear data transformations constitute essential operations in various machine learning algorithms, ranging from linear regression up to adaptive metric transformation. Often, linear scalings are not only used to improve model accuracy; rather, the feature coefficients provided by the mapping are interpreted as an indicator of the relevance of each feature for the task at hand. This principle, however, can be misleading, in particular for high-dimensional or correlated features, since it easily marks irrelevant features as relevant or vice versa. In this contribution, we propose a mathematical formalisation of the minimum and maximum feature relevance for a given linear transformation, which can be solved efficiently by means of linear programming. We evaluate the method in several benchmarks, where it becomes apparent that the minimum and maximum relevance closely resemble what is often referred to as weak and strong relevance of the features; hence, unlike the mere scaling provided by the linear mapping, they ensure valid interpretability.
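A deliberately simplified, hypothetical sketch of the linear-programming idea: bound how small or large the coefficient of one feature can be among all weight vectors that reproduce the given mapping's outputs on the data. The exact-equality constraint used here is only an illustrative stand-in for the paper's formalisation.

```python
import numpy as np
from scipy.optimize import linprog

def weight_range(X, w0, feature):
    """Minimum and maximum value the coefficient of `feature` can take among
    all weight vectors w with X w = X w0, i.e. identical outputs on the data.

    X : (n, d) data matrix, w0 : (d,) reference weight vector.
    Returns (w_min, w_max); entries are None if the LP fails or is unbounded.
    """
    d = X.shape[1]
    c = np.zeros(d)
    c[feature] = 1.0
    bounds = [(None, None)] * d          # coefficients are unconstrained

    lo = linprog(c,  A_eq=X, b_eq=X @ w0, bounds=bounds, method="highs")
    hi = linprog(-c, A_eq=X, b_eq=X @ w0, bounds=bounds, method="highs")
    w_min = lo.x[feature] if lo.success else None
    w_max = hi.x[feature] if hi.success else None
    return w_min, w_max
```

For correlated or high-dimensional data the equality system has many solutions, so the interval [w_min, w_max] can be wide even when the reference coefficient itself is small, which is exactly the ambiguity the paper addresses.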


WSOM | 2013

Efficient Approximations of Kernel Robust Soft LVQ

Daniela Hofmann; Andrej Gisbrecht; Barbara Hammer

Robust soft learning vector quantization (RSLVQ) constitutes a probabilistic extension of learning vector quantization (LVQ) based on a labeled Gaussian mixture model of the data. Training optimizes the likelihood ratio of the model and recovers a variant similar to LVQ2.1 in the limit of small bandwidth. Recently, RSLVQ has been extended to a kernel version, thus opening the way towards more general data structures characterized in terms of a Gram matrix only. While leading to state-of-the-art results, this extension has the drawback that models are no longer sparse, and quadratic training complexity is encountered. In this contribution, we investigate two approximation schemes which lead to sparse models: k-approximations of the prototypes and the Nyström approximation of the Gram matrix. We investigate the behavior of these approximations on a couple of benchmarks.
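The first of the two schemes can be illustrated in a few lines (illustrative names only, not the authors' code): a k-approximation keeps only the k largest-magnitude coefficients of a prototype and renormalizes them, so each prototype is described by at most k exemplars. The renormalization assumes the original coefficients form a convex combination.

```python
import numpy as np

def k_approximation(alpha, k):
    """Keep the k largest-magnitude entries of a prototype's coefficient
    vector and renormalize them to sum to one (assuming the original
    coefficients form a convex combination)."""
    sparse = np.zeros_like(alpha)
    keep = np.argsort(np.abs(alpha))[-k:]
    sparse[keep] = alpha[keep]
    total = sparse.sum()
    return sparse / total if total != 0 else sparse
```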


Neurocomputing | 2014

Learning vector quantization for (dis-)similarities

Barbara Hammer; Daniela Hofmann; Frank-Michael Schleif; Xibin Zhu


European Symposium on Artificial Neural Networks | 2013

Sparse approximations for kernel learning vector quantization

Daniela Hofmann; Barbara Hammer


Workshop NC² 2012 | 2012

Discriminative probabilistic prototype based models in kernel space

Daniela Hofmann; Andrej Gisbrecht; Barbara Hammer


Archive | 2016

Learning vector quantization for proximity data

Daniela Hofmann

Collaboration


Dive into Daniela Hofmann's collaborations.

Top Co-Authors

Benoît Frénay

Université catholique de Louvain
