
Publications


Featured research published by Michael Biehl.


Neural Computation | 2009

Adaptive relevance matrices in learning vector quantization

Petra Schneider; Michael Biehl; Barbara Hammer

We propose a new matrix learning scheme to extend relevance learning vector quantization (RLVQ), an efficient prototype-based classification algorithm, toward a general adaptive metric. By introducing a full matrix of relevance factors in the distance measure, correlations between different features and their importance for the classification scheme can be taken into account, and an automated, general metric adaptation takes place during training. In comparison to the weighted Euclidean metric used in RLVQ and its variations, a full matrix is better suited to represent the internal structure of the data appropriately. Large margin generalization bounds can be transferred to this case, leading to bounds that are independent of the input dimensionality. This also holds for local metrics attached to each prototype, which corresponds to piecewise quadratic decision boundaries. The algorithm is tested in comparison to alternative learning vector quantization schemes using an artificial data set, a benchmark multiclass problem from the UCI repository, and a problem from bioinformatics, the recognition of splice sites for C. elegans.
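The adaptive distance at the heart of this scheme can be sketched in a few lines of NumPy (a minimal illustration, not the authors' implementation), using the common parameterization Lambda = Omega^T Omega, which keeps the metric positive semi-definite by construction:

```python
import numpy as np

def relevance_distance(x, w, omega):
    """Adaptive squared distance d_Lambda(x, w) = (x - w)^T Lambda (x - w),
    where Lambda = Omega^T Omega is positive semi-definite by construction."""
    diff = x - w
    proj = omega @ diff          # Omega maps the difference vector
    return float(proj @ proj)    # equals diff^T Omega^T Omega diff

# Toy example: Omega = identity recovers the squared Euclidean distance.
x = np.array([1.0, 2.0])
w = np.array([0.0, 0.0])
d_euclid = relevance_distance(x, w, np.eye(2))          # 1 + 4 = 5

# A diagonal Omega recovers the weighted Euclidean metric of RLVQ;
# off-diagonal entries would additionally capture feature correlations.
d_weighted = relevance_distance(x, w, np.diag([1.0, 0.5]))  # 1 + 0.25*4 = 2
```

During training, the entries of Omega are adapted together with the prototypes, so that the metric itself is learned from the classification task.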


The Journal of Clinical Endocrinology and Metabolism | 2011

Urine Steroid Metabolomics as a Biomarker Tool for Detecting Malignancy in Adrenal Tumors

Wiebke Arlt; Michael Biehl; Angela E. Taylor; Stefanie Hahner; Rossella Libé; Beverly Hughes; Petra Schneider; David J. Smith; Han Stiekema; Nils Krone; Emilio Porfiri; Giuseppe Opocher; Jérôme Bertherat; Franco Mantero; Bruno Allolio; Massimo Terzolo; Peter Nightingale; Cedric Shackleton; Xavier Bertagna; Martin Fassnacht; Paul M. Stewart

Context: Adrenal tumors have a prevalence of around 2% in the general population. Adrenocortical carcinoma (ACC) is rare but accounts for 2–11% of incidentally discovered adrenal masses. Differentiating ACC from adrenocortical adenoma (ACA) represents a diagnostic challenge in patients with adrenal incidentalomas, with tumor size, imaging, and even histology all providing unsatisfactory predictive values.

Objective: Here we developed a novel steroid metabolomic approach, mass spectrometry-based steroid profiling followed by machine learning analysis, and examined its diagnostic value for the detection of adrenal malignancy.

Design: Quantification of 32 distinct adrenal-derived steroids was carried out by gas chromatography/mass spectrometry in 24-h urine samples from 102 ACA patients (age range 19–84 yr) and 45 ACC patients (20–80 yr). Underlying diagnosis was ascertained by histology and metastasis in ACC and by clinical follow-up [median duration 52 (range 26–201) months] without evidence of metastasis in ACA. Steroid excretion data were subjected to generalized matrix learning vector quantization (GMLVQ) to identify the most discriminative steroids.

Results: Steroid profiling revealed a pattern of predominantly immature, early-stage steroidogenesis in ACC. GMLVQ analysis identified a subset of nine steroids that performed best in differentiating ACA from ACC. Receiver-operating characteristic analysis of GMLVQ results demonstrated sensitivity = specificity = 90% (area under the curve = 0.97) employing all 32 steroids and sensitivity = specificity = 88% (area under the curve = 0.96) when using only the nine most differentiating markers.

Conclusions: Urine steroid metabolomics is a novel, highly sensitive, and specific biomarker tool for discriminating benign from malignant adrenal tumors, with obvious promise for the diagnostic work-up of patients with adrenal incidentalomas.


EPL | 1989

The AdaTron: An Adaptive Perceptron Algorithm

J.K. Anlauf; Michael Biehl

A new learning algorithm for neural networks of spin-glass type is proposed. It is found to relax exponentially towards the perceptron of optimal stability using the concept of adaptive learning. The patterns can be presented either sequentially or in parallel. A proof of convergence is given, and the method's performance is studied numerically.
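The adaptive update can be sketched as follows (a minimal simulation, assuming the standard AdaTron formulation with non-negative per-pattern embedding strengths and Hebbian-type weights; the data and parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random +/-1 patterns, labeled by a teacher vector so the problem is separable.
N, P = 20, 30
xi = rng.choice([-1.0, 1.0], size=(P, N))
teacher = rng.normal(size=N)
y = np.sign(xi @ teacher)

gamma = 1.0          # learning rate; convergence requires 0 < gamma < 2
x = np.zeros(P)      # embedding strengths, one per pattern, kept non-negative

for _ in range(500):                       # sequential presentation of patterns
    for mu in range(P):
        w = (x * y) @ xi / N               # weights as a sum of embedded patterns
        E = y[mu] * (w @ xi[mu])           # stability (aligned field) of pattern mu
        # AdaTron update: drive E towards 1, clipped so that x stays >= 0
        x[mu] = max(0.0, x[mu] + gamma * (1.0 - E))

w = (x * y) @ xi / N
stabilities = y * (xi @ w)
```

At the fixed point, patterns with positive embedding strength have stability exactly 1 and all others have stability at least 1, which is the perceptron of optimal stability (up to normalization).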


Monthly Notices of the Royal Astronomical Society | 2010

Post-correlation radio frequency interference classification methods

A. R. Offringa; A. G. de Bruyn; Michael Biehl; Saleem Zaroubi; G. Bernardi; V. N. Pandey

We describe and compare several post-correlation radio frequency interference (RFI) classification methods. As data sizes of observations grow with new and improved telescopes, the need for completely automated, robust methods for RFI mitigation is pressing. We investigated several classification methods and find that, for the data sets we used, the most accurate among them is the SumThreshold method. This is a new method formed from a combination of existing techniques, including a new way of thresholding. This iterative method estimates the astronomical signal by carrying out a surface fit in the time-frequency plane. With a theoretical accuracy of 95 per cent recognition and an approximately 0.1 per cent false probability rate in simple simulated cases, the method is in practice as good as the human eye in finding RFI. In addition, it is fast, robust, does not need a data model before it can be executed and works in almost all configurations with its default parameters. The method has been compared using simulated data with several other mitigation techniques, including one based upon the singular value decomposition of the time-frequency matrix, and has shown better results than the rest.
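The combinatorial thresholding idea can be sketched in one dimension (a simplified illustration, assuming the published rule that a run of M consecutive samples is flagged when its sum exceeds M * chi_M, with chi_M = chi_1 / rho**log2(M); the parameter values and toy data here are not from the paper):

```python
import numpy as np

def sum_threshold(data, chi1, rho=1.5, max_window=8):
    """1-D SumThreshold sketch: a run of M consecutive samples is flagged
    when its sum exceeds M * chi_M, with chi_M = chi1 / rho**log2(M).
    Previously flagged samples enter later sums with the value chi_M so
    that already-detected RFI does not dominate larger windows."""
    flags = np.zeros(len(data), dtype=bool)
    M = 1
    while M <= max_window:
        chi = chi1 / rho ** np.log2(M)
        values = np.where(flags, chi, data)   # damp flagged samples
        for start in range(len(data) - M + 1):
            if values[start:start + M].sum() > M * chi:
                flags[start:start + M] = True
        M *= 2
    return flags

# Toy spectrum: quiet baseline, one strong spike, one weak but extended interferer.
x = np.zeros(32)
x[5] = 10.0          # strong single-sample RFI, caught at window size 1
x[20:24] = 2.5       # below chi1 individually, caught only by summing a window
flags = sum_threshold(x, chi1=3.0)
```

The decreasing per-sample threshold for larger windows is what lets the method detect faint, extended interference that a single-sample cut would miss, while the damping of already-flagged samples prevents one strong spike from flagging its neighborhood.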


Intelligent Data Engineering and Automated Learning | 2007

Analysis of tiling microarray data by learning vector quantization and relevance learning

Michael Biehl; Rainer Breitling; Yang Li

We apply learning vector quantization to the analysis of tiling microarray data. As an example we consider the classification of C. elegans genomic probes as intronic or exonic. Training is based on the current annotation of the genome. Relevance learning techniques are used to weight and select features according to their importance for the classification. Among other findings, the analysis suggests that correlations between the perfect match intensity of a particular probe and its neighbors are highly relevant for successful exon identification.


Neural Computation | 2009

Distance learning in discriminative vector quantization

Petra Schneider; Michael Biehl; Barbara Hammer

Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.


Neural Networks | 2012

Limited Rank Matrix Learning, discriminative dimension reduction and visualization

Kerstin Bunte; Petra Schneider; Barbara Hammer; Frank-Michael Schleif; Thomas Villmann; Michael Biehl

We present an extension of the recently introduced Generalized Matrix Learning Vector Quantization algorithm. In the original scheme, adaptive square matrices of relevance factors parameterize a discriminative distance measure. We extend the scheme to matrices of limited rank corresponding to low-dimensional representations of the data. This makes it possible to incorporate prior knowledge of the intrinsic dimension and to reduce the number of adaptive parameters efficiently. In particular, for very high-dimensional data, the limitation of the rank can reduce computation time and memory requirements significantly. Furthermore, two- or three-dimensional representations constitute an efficient visualization method for labeled data sets. The identification of a suitable projection is not treated as a pre-processing step but as an integral part of the supervised training. Several real-world data sets serve as an illustration and demonstrate the usefulness of the suggested method.
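The rank limitation can be illustrated directly (a sketch under the parameterization Lambda = Omega^T Omega with a rectangular Omega; the dimensions and random data here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 10, 2                      # data dimension and limited rank (target dimension)
omega = rng.normal(size=(m, n))   # rectangular relevance matrix with m << n

# Lambda = Omega^T Omega is an n x n relevance matrix of rank at most m,
# with only m*n adaptive parameters instead of n*n.
lam = omega.T @ omega
rank = np.linalg.matrix_rank(lam)

# The same Omega doubles as a discriminative projection for visualization:
X = rng.normal(size=(100, n))     # 100 samples in the original space
X_2d = omega @ X.T                # their 2-D coordinates under the learned mapping
```

Because the projection is adapted as part of the supervised training rather than fixed beforehand, the resulting two- or three-dimensional coordinates reflect class structure rather than mere variance.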


IEEE Transactions on Neural Networks | 2010

Regularization in Matrix Relevance Learning

Petra Schneider; Kerstin Bunte; Han Stiekema; Barbara Hammer; Thomas Villmann; Michael Biehl

In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can display a tendency towards oversimplification in the course of training. An overly pronounced elimination of dimensions in feature space can have negative effects on the performance and may lead to instabilities in the training. We focus on matrix learning in generalized LVQ (GLVQ). Extending the cost function by an appropriate regularization term prevents the unfavorable behavior and can help to improve the generalization ability. The approach is first tested and illustrated in terms of artificial model data. Furthermore, we apply the scheme to benchmark classification data sets from the UCI Repository of Machine Learning. We demonstrate the usefulness of regularization also in the case of rank limited relevance matrices, i.e., matrix learning with an implicit, low-dimensional representation of the data.
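The effect of such a regularization term can be sketched schematically (assuming a penalty of the form -(lambda/2) * ln det(Omega Omega^T), which diverges when relevance eigenvalues collapse to zero; the matrices and the value of lambda below are illustrative, not the paper's experiments):

```python
import numpy as np

def regularization_penalty(omega, lam=0.1):
    """Penalty of the form -(lam/2) * ln det(Omega Omega^T). It grows
    without bound as any eigenvalue of the relevance matrix approaches
    zero, counteracting an overly pronounced elimination of dimensions."""
    sign, logdet = np.linalg.slogdet(omega @ omega.T)
    return -0.5 * lam * logdet

# Two metrics with the same total relevance (trace of Lambda equals 2):
balanced = np.eye(2)                                  # relevance spread evenly
collapsed = np.diag([np.sqrt(1.99), np.sqrt(0.01)])   # almost all on one dimension

p_balanced = regularization_penalty(balanced)     # ln det = 0, no penalty
p_collapsed = regularization_penalty(collapsed)   # strongly penalized
```

Adding this term to the cost pushes training away from near-singular relevance matrices, which is the oversimplification behavior described above.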


Pattern Recognition | 2011

Learning effective color features for content based image retrieval in dermatology

Kerstin Bunte; Michael Biehl; Marcel F. Jonkman; Nicolai Petkov

We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn favorable feature representations for this special application: limited rank matrix learning vector quantization (LiRaM LVQ) and a Large Margin Nearest Neighbor (LMNN) approach. Both methods use labeled training data and provide a discriminant linear transformation of the original features, potentially to a lower dimensional space. The extracted color features are used to retrieve images from a database by a k-nearest neighbor search. We perform a comparison of retrieval rates achieved with extracted and original features for eight different standard color spaces. We achieved significant improvements in every examined color space. The increase of the mean correct retrieval rate lies between 10% and 27% in the range of k = 1-25 retrieved images, and the correct retrieval rate lies between 64% and 84%. We present explicit combinations of RGB and CIE-Lab color features corresponding to healthy and lesion skin. LiRaM LVQ and the computationally more expensive LMNN give comparable results for large values of the method parameter κ of LMNN (κ ≥ 25), while LiRaM LVQ outperforms LMNN for smaller values of κ. We conclude that feature extraction by LiRaM LVQ leads to considerable improvement in color-based retrieval of dermatologic images.


Journal of Physics A | 1995

Learning by on-line gradient descent

Michael Biehl; Holm Schwarze

We study on-line gradient-descent learning in multilayer networks analytically and numerically. The training is based on randomly drawn inputs and their corresponding outputs as defined by a target rule. In the thermodynamic limit we derive deterministic differential equations for the order parameters of the problem which allow an exact calculation of the evolution of the generalization error. First we consider a single-layer perceptron with sigmoidal activation function learning a target rule defined by a network of the same architecture. For this model the generalization error decays exponentially with the number of training examples if the learning rate is sufficiently small. However, if the learning rate is increased above a critical value, perfect learning is no longer possible. For architectures with hidden layers and fixed hidden-to-output weights, such as the parity and the committee machine, we find additional effects related to the existence of symmetries in these problems.
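A minimal simulation of this student-teacher setting (same architecture for both, tanh in place of the error function used in the paper, randomly drawn inputs; all parameters here are illustrative) shows the student aligning with the teacher as examples are presented:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, eta = 50, 20000, 0.5        # input dimension, number of examples, small rate
g = np.tanh                        # sigmoidal activation (the paper uses erf)

B = rng.normal(size=N)
B *= np.sqrt(N) / np.linalg.norm(B)   # teacher vector, normalized so |B|^2 = N
J = 0.01 * rng.normal(size=N)         # student starts near the origin

def overlap(J, B):
    """Cosine of the angle between student and teacher; the generalization
    error decreases monotonically as this overlap approaches 1."""
    return (J @ B) / (np.linalg.norm(J) * np.linalg.norm(B))

start = overlap(J, B)
for _ in range(T):
    xi = rng.normal(size=N)                        # randomly drawn input
    h_s, h_t = (J @ xi) / np.sqrt(N), (B @ xi) / np.sqrt(N)
    # one on-line gradient step on the single-example error (g(h_s)-g(h_t))^2 / 2
    J -= (eta / np.sqrt(N)) * (g(h_s) - g(h_t)) * (1.0 - np.tanh(h_s) ** 2) * xi
end = overlap(J, B)
```

For a sufficiently small learning rate the overlap relaxes towards 1, mirroring the exponential decay of the generalization error derived analytically in the thermodynamic limit; too large a rate prevents perfect learning.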

Collaboration


Dive into Michael Biehl's collaborations.

Top Co-Authors

Kerstin Bunte, University of Birmingham
Wiebke Arlt, Queen Elizabeth Hospital Birmingham
Anarta Ghosh, University of Groningen
Beverly Hughes, University of Birmingham