Kurt Hornik
Vienna University of Economics and Business
Publication
Featured research published by Kurt Hornik.
Neural Networks | 1989
Kurt Hornik; Maxwell B. Stinchcombe; Halbert White
This paper rigorously establishes that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite-dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available. In this sense, multilayer feedforward networks are a class of universal approximators.
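The theorem is non-constructive, but its content is easy to see numerically. A minimal sketch in R using the nnet package (a standard single-hidden-layer implementation with logistic, i.e. squashing, hidden units); the target function and network size are illustrative assumptions, not taken from the paper:

```r
# Fit a one-hidden-layer network with logistic (squashing) hidden units
# to a smooth target; the approximation improves as hidden units are added.
library(nnet)

set.seed(1)
x <- seq(-3, 3, length.out = 200)
y <- sin(2 * x) + 0.5 * x                 # illustrative target function
fit <- nnet(x = matrix(x), y = y,
            size = 10,                    # 10 hidden units
            linout = TRUE,                # linear output unit
            maxit = 1000, trace = FALSE)
max(abs(predict(fit, matrix(x)) - y))     # sup-norm error on the grid
```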
Genome Biology | 2004
Robert Gentleman; Vincent J. Carey; Douglas M. Bates; Ben Bolstad; Marcel Dettling; Sandrine Dudoit; Byron Ellis; Laurent Gautier; Yongchao Ge; Jeff Gentry; Kurt Hornik; Torsten Hothorn; Wolfgang Huber; Stefano M. Iacus; Rafael A. Irizarry; Friedrich Leisch; Cheng Li; Martin Maechler; Anthony Rossini; Gunther Sawitzki; Colin A. Smith; Gordon K. Smyth; Luke Tierney; Jean Yee Hwa Yang; Jianhua Zhang
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
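For readers who want to try the project, a minimal sketch of installation and a first step; note that the BiocManager installer postdates this 2004 paper and is an assumption about current tooling, not part of the original article:

```r
# Install the Bioconductor core infrastructure and inspect an example
# ExpressionSet, a central data structure for expression data.
install.packages("BiocManager")        # CRAN front end for Bioconductor
BiocManager::install("Biobase")        # core infrastructure package
library(Biobase)

data(sample.ExpressionSet)             # bundled example dataset
exprs(sample.ExpressionSet)[1:3, 1:3]  # corner of the expression matrix
pData(sample.ExpressionSet)[1:3, ]     # matching sample annotations
```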
Neural Networks | 1991
Kurt Hornik
We show that standard multilayer feedforward networks with as few as a single hidden layer and arbitrary bounded and nonconstant activation function are universal approximators with respect to L^p(μ) performance criteria, for arbitrary finite input environment measures μ, provided only that sufficiently many hidden units are available. If the activation function is continuous, bounded and nonconstant, then continuous mappings can be learned uniformly over compact input sets. We also give very general conditions ensuring that networks with sufficiently smooth activation functions are capable of arbitrarily accurate approximation to a function and its derivatives.
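A compact restatement of the main density result in LaTeX; ψ denotes the activation and μ the input environment measure, following the abstract, and the class Σ(ψ) of single-hidden-layer networks is written out explicitly (a paraphrase, not the paper's exact wording):

```latex
% Single-hidden-layer networks: finite sums of affinely shifted activations.
\[
  \Sigma(\psi) = \Bigl\{\, x \mapsto \sum_{j=1}^{q} \beta_j \,
    \psi(a_j^{\top} x + b_j) \;:\; q \in \mathbb{N},\;
    \beta_j, b_j \in \mathbb{R},\; a_j \in \mathbb{R}^d \,\Bigr\}
\]
% Main result (paraphrased): if \psi is bounded and nonconstant, then
% \Sigma(\psi) is dense in L^p(\mu) for every finite measure \mu on
% \mathbb{R}^d and every 1 \le p < \infty.
```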
Journal of Computational and Graphical Statistics | 2006
Torsten Hothorn; Kurt Hornik; Achim Zeileis
Recursive binary partitioning is a popular tool for regression analysis. Two fundamental problems of the exhaustive search procedures usually applied to fit such models have been known for a long time: overfitting and a selection bias towards covariates with many possible splits or missing values. While pruning procedures are able to solve the overfitting problem, the variable selection bias still seriously affects the interpretability of tree-structured regression models. For some special cases, unbiased procedures have been suggested; however, they lack a common theoretical foundation. We propose a unified framework for recursive partitioning which embeds tree-structured regression models into a well-defined theory of conditional inference procedures. Stopping criteria based on multiple test procedures are implemented, and it is shown that the predictive performance of the resulting trees is as good as the performance of established exhaustive search procedures. It turns out that the partitions, and therefore the models, induced by both approaches are structurally different, confirming the need for unbiased variable selection. Moreover, it is shown that the prediction accuracy of trees with early stopping is equivalent to the prediction accuracy of pruned trees with unbiased variable selection. The methodology presented here is applicable to all kinds of regression problems, including nominal, ordinal, numeric, censored, and multivariate response variables, and arbitrary measurement scales of the covariates. Data from studies on glaucoma classification, node-positive breast cancer survival, and mammography experience are re-analyzed.
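The framework is implemented in the R package party (with a successor in partykit); a minimal sketch on a standard dataset, where the multiplicity-adjusted association tests serve as the stopping criterion, so no pruning step is required:

```r
# Conditional inference tree: variable selection and stopping are based
# on permutation tests, avoiding the split-count selection bias.
library(party)

airq <- subset(airquality, !is.na(Ozone))   # numeric regression example
ct <- ctree(Ozone ~ ., data = airq)
ct                                          # splits and per-node test statistics
predict(ct, newdata = airq[1:3, ])          # fitted terminal-node means
```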
Neural Networks | 1990
Kurt Hornik; Maxwell B. Stinchcombe; Halbert White
Neural Networks | 1989
Pierre Baldi; Kurt Hornik
We consider the problem of learning from examples in layered linear feed-forward neural networks using optimization methods, such as back propagation, with respect to the usual quadratic error function E of the connection weights. Our main result is a complete description of the landscape attached to E in terms of principal component analysis. We show that E has a unique minimum corresponding to the projection onto the subspace generated by the first principal vectors of a covariance matrix associated with the training patterns. All the additional critical points of E are saddle points (corresponding to projections onto subspaces generated by higher order vectors). The auto-associative case is examined in detail. Extensions and implications for the learning algorithms are discussed.
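The landscape result can be checked numerically: the optimal rank-k linear reconstruction of centered data is the orthogonal projection onto the span of the first k principal directions. A small sketch with illustrative data (this demonstrates the characterization of the minimum, not the paper's proof):

```r
# The unique-minimum reconstruction equals the PCA projection.
set.seed(1)
A <- matrix(rnorm(25), 5, 5)
X <- scale(matrix(rnorm(200 * 5), 200, 5) %*% A, scale = FALSE)  # centered

k <- 2
V <- prcomp(X)$rotation[, 1:k]    # first k principal directions
P <- V %*% t(V)                   # projection onto the principal subspace
err_pca <- sum((X - X %*% P)^2)   # reconstruction error at the minimum

# Any other rank-k projection typically does worse, e.g. onto two
# coordinate axes:
E2 <- diag(5)[, 1:2]
err_other <- sum((X - X %*% (E2 %*% t(E2)))^2)
c(pca = err_pca, coordinate = err_other)   # err_pca is no larger
```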
Neurocomputing | 2003
David Meyer; Friedrich Leisch; Kurt Hornik
Support vector machines (SVMs) are rarely benchmarked against other classification or regression methods. We compare a popular SVM implementation (libsvm) to 16 classification methods and 9 regression methods, all accessible through the software R, by means of standard performance measures (classification error and mean squared error), which are also analyzed by means of bias-variance decompositions. SVMs mostly showed good performance on both classification and regression tasks, but other methods proved to be very competitive.
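The libsvm interface used in the comparison is available in R through e1071::svm; a minimal classification sketch with a held-out error estimate (the dataset and split are illustrative, not one of the paper's benchmarks):

```r
# Train an SVM (RBF kernel by default) and estimate test error.
library(e1071)

set.seed(1)
idx  <- sample(nrow(iris), 100)              # random training split
fit  <- svm(Species ~ ., data = iris[idx, ])
pred <- predict(fit, newdata = iris[-idx, ])
mean(pred != iris$Species[-idx])             # test classification error
```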
Computational Statistics & Data Analysis | 2003
Achim Zeileis; Christian Kleiber; Walter Krämer; Kurt Hornik
The paper presents an approach to the analysis of data that contains (multiple) structural changes in a linear regression setup. We implement various strategies which have been suggested in the literature for testing against structural changes as well as a dynamic programming algorithm for the dating of the breakpoints in the R statistical software package. Using historical data on Nile river discharges, road casualties in Great Britain and oil prices in Germany it is shown that changes in the mean of a time series as well as in the coefficients of a linear regression are easily matched with identifiable historical, political or economic events.
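The testing and dating methods described here are implemented in the R package strucchange; a minimal sketch on the Nile discharge series analyzed in the paper:

```r
# Date structural breaks in the mean of the annual Nile flow series.
library(strucchange)

bp <- breakpoints(Nile ~ 1)   # dynamic-programming breakpoint estimation
summary(bp)                   # RSS and BIC for 0, 1, 2, ... breaks
breakdates(bp)                # estimated break date(s)

# Segment-wise mean model over the series, plotted against the data:
fm <- lm(Nile ~ breakfactor(bp, breaks = 1))
plot(Nile)
lines(ts(fitted(fm), start = 1871), col = 2)
```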
Neural Networks | 1993
Kurt Hornik
We show that standard feedforward networks with as few as a single hidden layer can uniformly approximate continuous functions on compacta provided that the activation function ψ is locally Riemann integrable and nonpolynomial, and have universal L^p(μ) approximation capabilities for finite and compactly supported input environment measures μ provided that ψ is locally bounded and nonpolynomial. In both cases, the input-to-hidden weights and hidden layer biases can be constrained to arbitrarily small sets; if in addition ψ is locally analytic, a single universal bias will do.
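A compact restatement in LaTeX of the two density claims, reusing the Σ(ψ) notation from the 1991 entry above (a paraphrase; the precise local integrability and boundedness conditions are as stated in the paper):

```latex
% Uniform approximation on compacta: if \psi is locally Riemann integrable
% and nonpolynomial, then for every compact K \subset \mathbb{R}^d, every
% f \in C(K), and every \varepsilon > 0 there is g \in \Sigma(\psi) with
\[
  \sup_{x \in K} \lvert f(x) - g(x) \rvert < \varepsilon .
\]
% L^p approximation: if \psi is locally bounded and nonpolynomial, then
% \Sigma(\psi) is dense in L^p(\mu) for every finite, compactly supported
% input environment measure \mu.
```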
The American Statistician | 2006
Torsten Hothorn; Kurt Hornik; Mark A. van de Wiel; Achim Zeileis
Conditioning on the observed data is an important and flexible design principle for statistical test procedures. Although generally applicable, permutation tests currently in use are limited to the treatment of special cases, such as contingency tables or K-sample problems. A new theoretical framework for permutation tests opens up the way to a unified and generalized view. This article argues that the transfer of such a theory to practical data analysis has important implications in many applications and requires tools that enable the data analyst to compute on the theoretical concepts as closely as possible. We reanalyze four datasets by adapting the general conceptual framework to these challenging inference problems and using the coin add-on package in the R system for statistical computing to show what one can gain from going beyond the “classical” test procedures.
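The framework is implemented in the coin add-on package named in the abstract; a minimal sketch of a conditional (permutation) two-sample test on a standard dataset (the dataset choice is illustrative, not one of the four reanalyzed studies):

```r
# Permutation test of independence between a numeric response and a
# two-level grouping factor, with a resampled null distribution.
library(coin)

independence_test(len ~ supp, data = ToothGrowth,
                  distribution = approximate(nresample = 9999))
```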