Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tom Heskes is active.

Publication


Featured research published by Tom Heskes.


Oja, E. (ed.), Kohonen Maps | 1999

Energy functions for self-organizing maps

Tom Heskes

By changing the definition of the winning unit, Kohonen's original learning rule can be viewed as performing stochastic gradient descent on an energy function. This is shown in two ways: by explicitly computing derivatives, and as a limiting case of a “soft” version of self-organizing maps with probabilistic winner assignments. Kinks in a one-dimensional map and twists in a two-dimensional map correspond to local minima in the energy landscape of the network weights. Changing the determination of the winning unit has no effect on the basic properties of the Kohonen learning algorithm, which remains a relatively simple procedure with remarkable self-organizing capabilities. At a more theoretical level, many results concerning the original Kohonen learning algorithm are not particularly surprising from a stochastic-approximation or optimization point of view, yet they are often difficult to prove for technical reasons.
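
The modified winner rule can be made concrete. Below is a minimal NumPy sketch, assuming a squared-error local cost and a fixed neighborhood matrix; the function and variable names are illustrative, not from the chapter:

```python
import numpy as np

def som_step(weights, h, x, eta):
    """One on-line step of Kohonen learning with the modified winner rule.

    weights: (n_units, dim) codebook vectors
    h:       (n_units, n_units) fixed neighborhood matrix, h[r, s] >= 0
    x:       (dim,) input pattern
    eta:     learning rate
    """
    # Local errors: e[r] = sum_s h[r, s] * ||x - w_s||^2.
    sq_dists = np.sum((weights - x) ** 2, axis=1)   # (n_units,)
    local_errors = h @ sq_dists
    winner = np.argmin(local_errors)                # modified winner rule
    # Standard Kohonen update; with this winner rule it is a stochastic
    # gradient step on E(w) = (1/2) * sum_s h[winner, s] * ||x - w_s||^2.
    weights += eta * h[winner][:, None] * (x - weights)
    return winner
```

With the usual nearest-neighbor winner the same update is not the gradient of any energy function; replacing the winner rule by the neighborhood-weighted local error is what restores the gradient interpretation.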


PLOS Computational Biology | 2015

MAGMA: Generalized Gene-Set Analysis of GWAS Data

Christiaan de Leeuw; Joris M. Mooij; Tom Heskes; Danielle Posthuma

By aggregating data for complex traits in a biologically meaningful way, gene and gene-set analysis constitute a valuable addition to single-marker analysis. However, although various methods for gene and gene-set analysis currently exist, they generally suffer from a number of issues. Statistical power for most methods is strongly affected by linkage disequilibrium between markers, multi-marker associations are often hard to detect, and the reliance on permutation to compute p-values tends to make the analysis computationally very expensive. To address these issues we have developed MAGMA, a novel tool for gene and gene-set analysis. The gene analysis is based on a multiple regression model, which provides better statistical performance. The gene-set analysis is built as a separate layer around the gene analysis for additional flexibility. This gene-set analysis also uses a regression structure to allow generalization to analysis of continuous properties of genes and simultaneous analysis of multiple gene sets and other gene properties. Simulations and an analysis of Crohn’s Disease data are used to evaluate the performance of MAGMA and to compare it to a number of other gene and gene-set analysis tools. The results show that MAGMA has significantly more power than other tools for both the gene and the gene-set analysis, identifying more genes and gene sets associated with Crohn’s Disease while maintaining a correct type I error rate. Moreover, the MAGMA analysis of the Crohn’s Disease data was found to be considerably faster as well.
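
The regression-based gene model can be illustrated as a principal-components regression: projecting the gene's genotype matrix onto its top components neutralizes collinearity from linkage disequilibrium and yields an analytic F-test instead of permutation. A simplified sketch for a quantitative trait (the function name and variance cutoff are illustrative, not MAGMA's actual interface or defaults):

```python
import numpy as np
from scipy import stats

def gene_pc_regression(genotypes, phenotype, var_keep=0.999):
    """F-test for a joint SNP effect on a quantitative phenotype.

    genotypes: (n_samples, n_snps) centered genotype matrix for one gene
    phenotype: (n_samples,) centered phenotype
    """
    n = genotypes.shape[0]
    # PCs of the genotype matrix; keep components covering var_keep of variance.
    u, s, _ = np.linalg.svd(genotypes, full_matrices=False)
    keep = np.cumsum(s**2) / np.sum(s**2) <= var_keep
    keep[0] = True
    pcs = u[:, keep] * s[keep]                      # (n_samples, k)
    k = pcs.shape[1]
    # OLS of phenotype on the PCs, F-test against the intercept-only null.
    beta, *_ = np.linalg.lstsq(pcs, phenotype, rcond=None)
    rss1 = np.sum((phenotype - pcs @ beta) ** 2)
    rss0 = np.sum(phenotype ** 2)                   # null model (centered data)
    f = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
    return stats.f.sf(f, k, n - k - 1)              # analytic p-value
```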


IEEE Transactions on Neural Networks | 2001

Self-organizing maps, vector quantization, and mixture modeling

Tom Heskes

Self-organizing maps are popular algorithms for unsupervised learning and data visualization. Exploiting the link between vector quantization and mixture modeling, we derive expectation-maximization (EM) algorithms for self-organizing maps with and without missing values. We compare self-organizing maps with the elastic-net approach and explain why the former is better suited for the visualization of high-dimensional data. Several extensions and improvements are discussed. As an illustration we apply a self-organizing map based on a multinomial distribution to market basket analysis.
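
A rough sketch of the EM view, assuming isotropic Gaussian components and a fixed, row-normalized neighborhood matrix; the paper's exact formulation may differ in details:

```python
import numpy as np

def som_em(X, weights, h, sigma2, n_iter=50):
    """EM-style batch training of a SOM viewed as a mixture model.

    X:       (n, dim) data
    weights: (k, dim) initial codebook
    h:       (k, k) row-normalized neighborhood matrix
    sigma2:  isotropic component variance
    """
    for _ in range(n_iter):
        # E-step: responsibility of "winner" r for each point, with the
        # neighborhood-smeared squared error in the exponent.
        d2 = ((X[:, None, :] - weights[None, :, :]) ** 2).sum(-1)  # (n, k)
        logits = -0.5 / sigma2 * (d2 @ h.T)                        # (n, k)
        logits -= logits.max(axis=1, keepdims=True)
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)
        # M-step: unit s collects each point weighted by sum_r q[i, r] h[r, s].
        w_resp = q @ h                                             # (n, k)
        weights = (w_resp.T @ X) / w_resp.sum(axis=0)[:, None]
    return weights
```

Missing values fit naturally into this scheme: the E-step responsibilities are computed from the observed coordinates only, and the M-step imputes the missing ones from the current components.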


Neural Computation | 2004

On the uniqueness of loopy belief propagation fixed points

Tom Heskes

We derive sufficient conditions for the uniqueness of loopy belief propagation fixed points. These conditions depend on both the structure of the graph and the strength of the potentials and naturally extend those for convexity of the Bethe free energy. We compare them with (a strengthened version of) conditions derived elsewhere for pairwise potentials. We discuss possible implications for convergent algorithms, as well as for other approximate free energies.
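
For reference, this is the algorithm whose fixed points the paper analyzes. A bare-bones sum-product implementation for a pairwise MRF with binary variables (not code from the paper); whether repeated sweeps always settle on the same fixed point is exactly what the uniqueness conditions address:

```python
import numpy as np

def loopy_bp(psi_node, psi_edge, edges, n_iter=100, tol=1e-10):
    """Sum-product loopy belief propagation on a pairwise binary MRF.

    psi_node: dict node -> (2,) node potentials
    psi_edge: dict (i, j) -> (2, 2) edge potentials, keyed with i < j
    edges:    list of (i, j) pairs with i < j
    """
    msgs, nbrs = {}, {i: [] for i in psi_node}
    for i, j in edges:
        msgs[(i, j)] = np.ones(2) / 2     # one message per directed edge
        msgs[(j, i)] = np.ones(2) / 2
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(n_iter):
        delta = 0.0
        for a, b in list(msgs):
            pot = psi_edge[(a, b)] if (a, b) in psi_edge else psi_edge[(b, a)].T
            incoming = psi_node[a].copy()
            for k in nbrs[a]:
                if k != b:
                    incoming = incoming * msgs[(k, a)]
            new = pot.T @ incoming        # sum over the sender's state
            new = new / new.sum()
            delta = max(delta, float(np.abs(new - msgs[(a, b)]).max()))
            msgs[(a, b)] = new
        if delta < tol:
            break                         # reached a fixed point
    beliefs = {}
    for i in psi_node:
        b = psi_node[i].copy()
        for k in nbrs[i]:
            b = b * msgs[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs

# Three binary nodes in a cycle with attractive couplings.
J = np.array([[2.0, 0.5], [0.5, 2.0]])
nodes = {0: np.array([0.6, 0.4]), 1: np.ones(2), 2: np.ones(2)}
edges = [(0, 1), (1, 2), (0, 2)]
print(loopy_bp(nodes, {e: J for e in edges}, edges))
```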


NeuroImage | 2010

Efficient Bayesian multivariate fMRI analysis using a sparsifying spatio-temporal prior

Marcel A. J. van Gerven; Botond Cseke; Floris P. de Lange; Tom Heskes

Bayesian logistic regression with a multivariate Laplace prior is introduced as a multivariate approach to the analysis of neuroimaging data. It is shown that, by rewriting the multivariate Laplace distribution as a scale mixture, we can incorporate spatio-temporal constraints which lead to smooth importance maps that facilitate subsequent interpretation. The posterior of interest is computed using an approximate inference method called expectation propagation and becomes feasible due to fast inversion of a sparse precision matrix. We illustrate the performance of the method on an fMRI dataset acquired while subjects were shown handwritten digits. The obtained models perform competitively in terms of predictive performance and give rise to interpretable importance maps. Estimation of the posterior of interest is shown to be feasible even for very large models with thousands of variables.
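
The scale-mixture representation mentioned above is easy to verify numerically. A univariate sketch (the paper's prior is multivariate, with the scales coupled to encode spatio-temporal smoothness):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n = 2.0, 1_000_000

# Scale-mixture representation: if v ~ Exponential(rate = lam**2 / 2)
# and x | v ~ N(0, v), then marginally x ~ Laplace(0, 1/lam).
v = rng.exponential(scale=2.0 / lam**2, size=n)
x = rng.normal(0.0, np.sqrt(v))

# For Laplace(0, b): E|x| = b and Var(x) = 2 * b**2, with b = 1/lam.
print(np.mean(np.abs(x)), 1.0 / lam)    # both ~ 0.5
print(np.var(x), 2.0 / lam**2)          # both ~ 0.5
```

Conditioned on the scales, the prior is Gaussian, which is what makes expectation propagation with a sparse precision matrix tractable.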


Journal of Artificial Intelligence Research | 2006

Convexity arguments for efficient minimization of the Bethe and Kikuchi free energies

Tom Heskes

Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms have been shown to correspond to extrema of the Bethe and Kikuchi free energy, both of which are approximations of the exact Helmholtz free energy. However, belief propagation does not always converge, which motivates approaches that explicitly minimize the Kikuchi/Bethe free energy, such as CCCP and UPS. Here we describe a class of algorithms that solves this typically non-convex constrained minimization problem through a sequence of convex constrained minimizations of upper bounds on the Kikuchi free energy. Intuitively one would expect tighter bounds to lead to faster algorithms, which is indeed convincingly demonstrated in our simulations. Several ideas are applied to obtain tight convex bounds that yield dramatic speed-ups over CCCP.
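
The double-loop idea can be illustrated on a toy one-dimensional problem: replace the concave part of the objective with its tangent to obtain a convex upper bound that touches the objective at the current iterate, then minimize the bound. This is CCCP in miniature, not the paper's tighter bounds:

```python
import numpy as np

# Minimize f(x) = x**4 - 2*x**2: convex part x**4, concave part -2*x**2.
x = 0.3
for _ in range(30):
    # Outer loop: bound at x_t is g(x) = x**4 - 2*x_t**2 - 4*x_t*(x - x_t),
    # with g >= f everywhere and g(x_t) = f(x_t).
    # Inner loop: g'(x) = 4*x**3 - 4*x_t = 0  =>  x = cbrt(x_t).
    x = np.cbrt(x)
print(x)   # -> 1.0, a local minimum of f
```

Each bound minimization decreases f, so the iterates descend monotonically; the paper's contribution is constructing much tighter convex bounds on the Kikuchi free energy, so that fewer (and cheaper) outer steps are needed than with CCCP.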


Neural Computation | 1998

Bias/variance decompositions for likelihood-based estimators

Tom Heskes

The bias/variance decomposition of mean-squared error is well understood and relatively straightforward. In this note, a similar simple decomposition is derived, valid for any kind of error measure that, when using the appropriate probability model, can be derived from a Kullback-Leibler divergence or log-likelihood.
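
Concretely, for KL-based error the appropriate "mean" prediction is the normalized geometric mean of the ensemble, and the expected error then splits exactly into a bias term plus a variance term. A numerical check with random categorical predictions (illustrative, not an experiment from the note):

```python
import numpy as np

rng = np.random.default_rng(1)

def kl(p, q):
    return np.sum(p * np.log(p / q))

# Ensemble of predicted distributions over 4 classes, and a target t.
Y = rng.dirichlet(np.ones(4), size=200)
t = rng.dirichlet(np.ones(4))

# The "mean" predictor for KL error: normalized geometric mean.
ybar = np.exp(np.mean(np.log(Y), axis=0))
ybar /= ybar.sum()

lhs = np.mean([kl(t, y) for y in Y])                    # expected error
rhs = kl(t, ybar) + np.mean([kl(ybar, y) for y in Y])   # bias + variance
print(np.allclose(lhs, rhs))                            # True
```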


North-holland Mathematical Library | 1993

On-line learning processes in artificial neural networks

Tom Heskes; Hilbert J. Kappen

We study on-line learning processes in artificial neural networks from a general point of view. On-line learning means that a learning step takes place at each presentation of a randomly drawn training pattern; it can be viewed as a stochastic process governed by a continuous-time master equation. On-line learning is necessary if not all training patterns are available all the time, which occurs in many applications when the training patterns are drawn from a time-dependent environmental distribution. Studying learning in a changing environment, we encounter a conflict between the adaptability and the confidence of the network's representation. Minimization of a criterion incorporating both effects yields an algorithm for on-line adaptation of the learning parameter. The inherent noise of on-line learning makes it possible to escape from undesired local minima of the error potential on which the learning rule performs (stochastic) gradient descent. We quantify these often-made claims by considering the transition times between various minima, and apply the results to transitions from “twists” in two-dimensional self-organizing maps to perfectly ordered configurations. Finally, we discuss the capabilities of on-line learning for global optimization.
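
The adaptability/confidence conflict is easy to reproduce: when tracking a drifting target with per-pattern updates, a small learning rate lags behind the drift while a large one fluctuates around it, and some intermediate value is best. A toy sketch (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def online_tracking_error(eta, n_steps=20_000, drift=0.01, noise=1.0):
    """Per-pattern SGD on squared error while the environment drifts."""
    target, w, err = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        target += drift                       # changing environment
        x = target + noise * rng.normal()     # one training pattern
        w += eta * (x - w)                    # on-line learning step
        err += (w - target) ** 2
    return err / n_steps

for eta in [0.001, 0.01, 0.1, 0.5]:
    print(eta, online_tracking_error(eta))
# Small eta: large lag behind the drift. Large eta: large fluctuations.
```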


Nature Reviews Genetics | 2016

The statistical properties of gene-set analysis.

Christiaan de Leeuw; Benjamin M. Neale; Tom Heskes; Danielle Posthuma

The rapid increase in loci discovered in genome-wide association studies has created a need to understand the biological implications of these results. Gene-set analysis provides a means of gaining such understanding, but its statistical properties are not well understood, which compromises our ability to interpret its results. In this Analysis article, we provide an extensive statistical evaluation of the core structure that is inherent to all gene-set analyses and examine current implementations in available tools. We show which factors affect valid and successful detection of gene sets, providing a solid foundation for performing and interpreting gene-set analysis.
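
As one concrete instance of that core structure, a competitive gene-set test can be phrased as a regression of gene-level statistics on set membership. A bare-bones sketch that deliberately ignores confounders such as gene size and gene-gene correlation, which the article shows must be accounted for:

```python
import numpy as np
from scipy import stats

def competitive_geneset_test(z, in_set):
    """One-sided test of whether genes in a set show stronger association
    signal than genes outside it, via simple linear regression.

    z:      (n_genes,) gene-level association Z-scores
    in_set: (n_genes,) boolean membership indicator
    """
    x = in_set.astype(float)
    x -= x.mean()                                  # center the predictor
    beta = (x @ (z - z.mean())) / (x @ x)          # OLS slope
    resid = (z - z.mean()) - beta * x
    se = np.sqrt(resid @ resid / (len(z) - 2) / (x @ x))
    return stats.t.sf(beta / se, df=len(z) - 2)    # one-sided p-value
```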


Neural Networks | 1997

Task-dependent learning of attention

Piërre van de Laar; Tom Heskes; Stan C. A. M. Gielen

In this article, we propose a neural network model for selective covert visual attention. The model can learn to focus its attention on important features, depending on the task to be fulfilled, by gating the flow of information from the lower to the higher levels of the visual system. It is kept as simple as possible, yet is still capable of reproducing attentional behavior observed in psychological experiments. Computer simulations demonstrate that (1) it can learn categories to reduce reaction time without a decrease in performance, (2) it shows performance similar to that of humans in feature and conjunction search, and (3) its learning dynamics are comparable with those of humans.
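
A toy version of the gating mechanism, assuming sigmoidal per-feature gates trained jointly with a linear read-out on the task loss; this is a hypothetical reduction for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-feature gates multiply the input before a logistic read-out; both
# gate parameters and read-out weights follow the task-loss gradient, so
# the network learns which features deserve attention for this task.
n_feat, n_steps, eta = 8, 5_000, 0.1
g_logit = np.zeros(n_feat)          # gate parameters (gate = sigmoid(g_logit))
w = rng.normal(0, 0.1, n_feat)      # read-out weights

for _ in range(n_steps):
    x = rng.normal(size=n_feat)
    y = float(x[0] + x[1] > 0)                    # only features 0 and 1 matter
    gate = 1.0 / (1.0 + np.exp(-g_logit))
    p = 1.0 / (1.0 + np.exp(-(w @ (gate * x))))   # predicted class probability
    delta = p - y                                 # cross-entropy gradient at logit
    w -= eta * delta * gate * x
    g_logit -= eta * delta * w * x * gate * (1 - gate)

# Gates for the informative features 0 and 1 approach 1; the rest drift near 0.5.
print(np.round(1.0 / (1.0 + np.exp(-g_logit)), 2))
```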

Collaboration


Dive into Tom Heskes's collaborations.

Top Co-Authors

Perry Groot, Radboud University Nijmegen
Wim Wiegerinck, Radboud University Nijmegen
Tom Claassen, Radboud University Nijmegen
Hilbert J. Kappen, Radboud University Nijmegen
Adriana Birlutiu, Radboud University Nijmegen
Bart Bakker, Radboud University Nijmegen