
Publication


Featured research published by Matthias Hein.


Conference on Learning Theory | 2005

From graphs to manifolds – weak and strong pointwise consistency of graph Laplacians

Matthias Hein; Jean-Yves Audibert; Ulrike von Luxburg

In the machine learning community it is generally believed that graph Laplacians corresponding to a finite sample of data points converge to a continuous Laplace operator if the sample size increases. Even though this assertion serves as a justification for many Laplacian-based algorithms, so far only some aspects of this claim have been rigorously proved. In this paper we close this gap by establishing the strong pointwise consistency of a family of graph Laplacians with data-dependent weights to some weighted Laplace operator. Our investigation also includes the important case where the data lies on a submanifold of R^d.
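The object the abstract studies can be sketched in a few lines. The Gaussian weights and the bandwidth h below are illustrative choices, not the paper's exact family of data-dependent weights; the sketch only shows how a graph Laplacian is built from a finite sample:

```python
import numpy as np

def graph_laplacian(X, h=0.5):
    """Unnormalized and random-walk graph Laplacians from a sample X
    (n points in R^d) with Gaussian weights w_ij = exp(-||x_i-x_j||^2 / h^2)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / h**2)
    np.fill_diagonal(W, 0.0)           # no self-loops
    D = np.diag(W.sum(axis=1))
    L = D - W                          # unnormalized graph Laplacian
    L_rw = np.linalg.solve(D, L)       # random-walk normalization D^{-1} L
    return L, L_rw

# toy sample from a circle, i.e. a 1-d submanifold of R^2
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta)])
L, L_rw = graph_laplacian(X)
print(L.shape)
```

Both matrices have zero row sums, the discrete analogue of the Laplace operator annihilating constants; the consistency result concerns how such matrices behave as the sample size grows.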


Nucleic Acids Research | 2012

An integer linear programming approach for finding deregulated subgraphs in regulatory networks.

Christina Backes; Alexander Rurainski; Gunnar W. Klau; Oliver Müller; Daniel Stöckel; Andreas Gerasch; Jan Küntzer; Daniela Maisel; Nicole Ludwig; Matthias Hein; Andreas Keller; Helmut Burtscher; Michael Kaufmann; Eckart Meese; Hans-Peter Lenhof

Deregulation of cell signaling pathways plays a crucial role in the development of tumors. The identification of such pathways requires effective analysis tools that facilitate the interpretation of expression differences. Here, we present a novel and highly efficient method for identifying deregulated subnetworks in a regulatory network. Given a score for each node that measures the degree of deregulation of the corresponding gene or protein, the algorithm computes the heaviest connected subnetwork of a specified size reachable from a designated root node. This root node can be interpreted as a molecular key player responsible for the observed deregulation. To demonstrate the potential of our approach, we analyzed three gene expression data sets. In one scenario, we compared expression profiles of non-malignant primary mammary epithelial cells derived from BRCA1 mutation carriers and of epithelial cells without BRCA1 mutation. Our results suggest that oxidative stress plays an important role in epithelial cells of BRCA1 mutation carriers and that the activation of stress proteins may result in avoidance of apoptosis leading to an increased overall survival of cells with genetic alterations. In summary, our approach opens new avenues for the elucidation of pathogenic mechanisms and for the detection of molecular key players.
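The core combinatorial problem in the abstract, finding the heaviest connected subnetwork of a given size reachable from a designated root, can be illustrated with a toy brute-force search. The paper solves it with an integer linear program; the graph, scores, and function name here are hypothetical:

```python
from itertools import combinations

def heaviest_rooted_subnetwork(adj, scores, root, k):
    """Brute force: among all connected node sets of size k that contain
    `root`, return the one with the largest total score.
    (Only a toy illustration; the paper uses an ILP for real networks.)"""
    others = [v for v in adj if v != root]
    best, best_score = None, float("-inf")
    for combo in combinations(others, k - 1):
        sub = set(combo) | {root}
        # connectivity check of the induced subgraph via DFS from root
        seen, stack = {root}, [root]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w in sub and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if seen == sub:
            s = sum(scores[v] for v in sub)
            if s > best_score:
                best, best_score = sub, s
    return best, best_score

# toy regulatory network with hypothetical deregulation scores
adj = {"r": ["a", "b"], "a": ["r", "c"], "b": ["r"], "c": ["a", "d"], "d": ["c"]}
scores = {"r": 1.0, "a": 0.5, "b": 2.0, "c": 3.0, "d": 0.1}
print(heaviest_rooted_subnetwork(adj, scores, "r", 3))
```

The brute-force enumeration is exponential in k, which is exactly why an ILP formulation is needed for genome-scale regulatory networks.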


IEEE Transactions on Antennas and Propagation | 2008

An Eigen-Analysis of Compact Antenna Arrays and Its Application to Port Decoupling

Christian Volmer; Jorn Weber; Ralf Stephan; Kurt Blau; Matthias Hein

Placing the radiators of antenna arrays closer than half a wavelength apart aggravates the problem of power mismatch. Based on efficiency considerations, a general analysis of this effect is presented, putting forward a simple tool to quantify, compare, and optimize the performance of antenna arrays. This analysis is not restricted with respect to the number of radiators or the degree of compactness. In order to improve power matching, a systematic approach for the design of lossless decoupling and matching networks based on 180° directional couplers is suggested for up to eight radiators. Implications of network losses, which have received little attention in the past, are analyzed and discussed by means of a manufactured three-element prototype array.


International Conference on Machine Learning | 2009

Spectral clustering based on the graph p-Laplacian

Thomas Bühler; Matthias Hein

We present a generalized version of spectral clustering using the graph p-Laplacian, a nonlinear generalization of the standard graph Laplacian. We show that the second eigenvector of the graph p-Laplacian interpolates between a relaxation of the normalized cut and the Cheeger cut. Moreover, we prove that in the limit as p → 1 the cut found by thresholding the second eigenvector of the graph p-Laplacian converges to the optimal Cheeger cut. Furthermore, we provide an efficient numerical scheme to compute the second eigenvector of the graph p-Laplacian. The experiments show that the clustering found by p-spectral clustering is at least as good as standard spectral clustering, but often leads to significantly better results.
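For p = 2 the method reduces to standard spectral clustering, which can be sketched directly; the p ≠ 2 case requires the authors' numerical scheme for the nonlinear eigenproblem and is not attempted here. A minimal bipartition sketch, with an illustrative toy graph:

```python
import numpy as np

def spectral_bipartition(W):
    """Standard (p = 2) spectral bipartition: threshold the second
    eigenvector of the unnormalized graph Laplacian at its median."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    fiedler = vecs[:, 1]                # second eigenvector (Fiedler vector)
    return (fiedler > np.median(fiedler)).astype(int)

# two cliques of 4 nodes joined by one weak bridge edge
W = np.zeros((8, 8))
W[:4, :4] = 1.0
W[4:, 4:] = 1.0
W[3, 4] = W[4, 3] = 0.1
np.fill_diagonal(W, 0.0)
labels = spectral_bipartition(W)
print(labels)
```

The Fiedler vector takes roughly constant values of opposite sign on the two cliques, so thresholding it cuts the weak bridge, i.e. it finds the low-cut partition.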


International Conference on Machine Learning | 2005

Intrinsic dimensionality estimation of submanifolds in R^d

Matthias Hein; Jean-Yves Audibert

We present a new method to estimate the intrinsic dimensionality of a submanifold M in R^d from random samples. The method is based on the convergence rates of a certain U-statistic on the manifold. We at least partially solve the question of how to choose the scale of the data. Moreover, the proposed method is easy to implement, can handle large data sets, and performs very well even for small sample sizes. We compare the proposed method to two standard estimators on several artificial as well as real data sets.
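As a rough stand-in for the U-statistic estimator (whose exact form the abstract does not give), a simple two-scale correlation-dimension estimate illustrates the underlying idea of reading the dimension off the scaling of pairwise-distance counts:

```python
import numpy as np

def correlation_dimension(X, r1, r2):
    """Two-scale correlation-dimension estimate: the fraction of sample
    pairs within radius r scales like C(r) ~ r^d on a d-dimensional
    manifold, so d ~ log(C(r2)/C(r1)) / log(r2/r1).
    (A simple stand-in, not the paper's U-statistic estimator.)"""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    dist = np.sqrt(sq)
    iu = np.triu_indices(len(X), k=1)   # each unordered pair once
    c1 = np.mean(dist[iu] < r1)
    c2 = np.mean(dist[iu] < r2)
    return np.log(c2 / c1) / np.log(r2 / r1)

# 1-d circle embedded in R^3: the estimate should be close to 1
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 500)
X = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
print(round(correlation_dimension(X, 0.1, 0.2), 2))
```

The choice of the two radii is exactly the "scale of the data" question the abstract mentions: too small and the counts are noisy, too large and curvature biases the estimate.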


Theoretical Computer Science | 2009

Optimal construction of k-nearest-neighbor graphs for identifying noisy clusters

Markus Maier; Matthias Hein; Ulrike von Luxburg

We study clustering algorithms based on neighborhood graphs on a random sample of data points. The question we ask is how such a graph should be constructed in order to obtain optimal clustering results. Which type of neighborhood graph should one choose, the mutual k-nearest-neighbor graph or the symmetric k-nearest-neighbor graph? What is the optimal parameter k? In our setting, clusters are defined as connected components of the t-level set of the underlying probability distribution. Clusters are said to be identified in the neighborhood graph if connected components in the graph correspond to the true underlying clusters. Using techniques from random geometric graph theory, we prove bounds on the probability that clusters are identified successfully, both in a noise-free and in a noisy setting. These bounds lead to several conclusions. First, k has to be chosen surprisingly high (of order n rather than of order log n) to maximize the probability of cluster identification. Second, the major difference between the mutual and the symmetric k-nearest-neighbor graph occurs when one attempts to detect the most significant cluster only.
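The two graph constructions compared in the paper can be sketched as follows; the blob data and parameter choices are illustrative, not the paper's experimental setup:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def knn_graph(X, k, mutual):
    """Symmetric (OR) or mutual (AND) k-nearest-neighbor graph of X."""
    d = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    np.fill_diagonal(d, np.inf)         # a point is not its own neighbor
    n = len(X)
    A = np.zeros((n, n), dtype=bool)
    idx = np.argsort(d, axis=1)[:, :k]  # k nearest neighbors of each point
    for i in range(n):
        A[i, idx[i]] = True
    A = (A & A.T) if mutual else (A | A.T)
    return csr_matrix(A)

# two well-separated blobs: connected components should recover the clusters
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
for mutual in (False, True):
    n_comp, _ = connected_components(knn_graph(X, 7, mutual))
    print(n_comp)
```

The mutual graph keeps only edges both endpoints agree on, so it is sparser and disconnects more easily than the symmetric graph, which is the asymmetry the paper's bounds quantify.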


Electronic Journal of Statistics | 2013

Non-negative least squares for high-dimensional linear models: Consistency and sparse recovery without regularization

Martin Slawski; Matthias Hein

Least squares fitting is in general not useful for high-dimensional linear models, in which the number of predictors is of the same or even larger order of magnitude than the number of samples. Theory developed in recent years has coined a paradigm according to which sparsity-promoting regularization is regarded as a necessity in such a setting. Deviating from this paradigm, we show that non-negativity constraints on the regression coefficients may be similarly effective as explicit regularization if the design matrix has additional properties, which are met in several applications of non-negative least squares (NNLS). We show that for these designs, the performance of NNLS with regard to prediction and estimation is comparable to that of the lasso. We argue further that in specific cases, NNLS may have a better ℓ∞-rate in estimation and hence also advantages with respect to support recovery when combined with thresholding. From a practical point of view, NNLS does not depend on a regularization parameter and is hence easier to use.


Journal of Computer and System Sciences | 2005

Maximal margin classification for metric spaces

Matthias Hein; Olivier Bousquet; Bernhard Schölkopf

In order to apply the maximum margin method in arbitrary metric spaces, we suggest embedding the metric space into a Banach or Hilbert space and performing linear classification in this space. We propose several embeddings and recall that an isometric embedding into a Banach space is always possible, while an isometric embedding into a Hilbert space is only possible for certain metric spaces. As a result, we obtain a general maximum margin classification algorithm for arbitrary metric spaces (whose solution is approximated by an algorithm of Graepel et al. (International Conference on Artificial Neural Networks 1999, pp. 304-309)). Interestingly enough, the embedding approach, when applied to a metric which can be embedded into a Hilbert space, yields the support vector machine (SVM) algorithm, which emphasizes the fact that its solution depends on the metric and not on the kernel. Furthermore, we give upper bounds on the capacity of the function classes corresponding to both embeddings in terms of Rademacher averages. Finally, we compare the capacities of these function classes directly.


Eurographics | 2008

Enhancement of bright video features for HDR displays

Piotr Didyk; Rafal Mantiuk; Matthias Hein; Hans-Peter Seidel


Computer Vision and Pattern Recognition | 2016

Latent Embeddings for Zero-Shot Classification

Yongqin Xian; Zeynep Akata; Gaurav Sharma; Quynh Nguyen; Matthias Hein; Bernt Schiele
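As a small, hedged illustration of the non-negative least squares (NNLS) approach from the Slawski and Hein abstract above: the design, noise level, and sparsity below are invented for the sketch, and scipy's generic NNLS solver stands in for any particular implementation.

```python
import numpy as np
from scipy.optimize import nnls

# High-dimensional setting (p > n) with a nonnegative design and a
# sparse nonnegative coefficient vector; no regularization parameter.
rng = np.random.default_rng(3)
n, p, s = 100, 200, 5             # samples, predictors, active coefficients
A = rng.uniform(0, 1, (n, p))     # entrywise-nonnegative design matrix
beta = np.zeros(p)
beta[:s] = rng.uniform(1, 2, s)   # sparse, nonnegative truth
y = A @ beta + 0.01 * rng.normal(size=n)

beta_hat, _ = nnls(A, y)          # min ||A b - y||_2  subject to  b >= 0
print(round(np.linalg.norm(A @ beta_hat - y), 3))
```

Note there is no tuning parameter anywhere: the only constraint is b ≥ 0, which for suitable designs plays the role the abstract assigns to explicit sparsity-promoting regularization.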

Collaboration


Dive into Matthias Hein's collaborations.

Top Co-Authors

Ralf Stephan (Technische Universität Ilmenau)
Jens Müller (Technische Universität Ilmenau)
H. Piel (University of Wuppertal)
Kurt Blau (Technische Universität Ilmenau)
Stefan Humbla (Technische Universität Ilmenau)
Reiner S. Thomä (Technische Universität Ilmenau)
U. Schwarz (Technische Universität Ilmenau)
Christian Volmer (Technische Universität Ilmenau)
J. Trabert (Technische Universität Ilmenau)
Frank Wollenschläger (Technische Universität Ilmenau)