Publication


Featured research published by Aapo Hyvärinen.


Neural Networks | 2000

Independent component analysis: algorithms and applications

Aapo Hyvärinen; Erkki Oja

A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject.
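
As a concrete illustration of the linear ICA model described above, the following minimal numpy sketch (the source distributions and mixing matrix are arbitrary choices for demonstration) generates two non-Gaussian sources, mixes them linearly as x = As, and whitens the observations, the standard preprocessing step before the independent components are estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two non-Gaussian sources: Laplacian and uniform (arbitrary choices).
s = np.vstack([rng.laplace(size=n), rng.uniform(-1.0, 1.0, size=n)])

# Arbitrary mixing matrix A; each observation is a linear combination
# of the sources: x = A s.
A = np.array([[1.0, 0.5],
              [0.3, 2.0]])
x = A @ s

# Whitening: linearly transform x so that its covariance is identity.
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E / np.sqrt(d)) @ E.T @ x   # symmetric whitening: E D^{-1/2} E^T x

# After whitening, ICA reduces to finding an orthogonal W such that
# the components of W z are as independent as possible.
print(np.round(np.cov(z), 3))    # approximately the identity matrix
```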


IEEE Transactions on Neural Networks | 1999

Fast and robust fixed-point algorithms for independent component analysis

Aapo Hyvärinen

Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon's information-theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information, and estimation of individual independent components as projection pursuit directions. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model, and it is shown how to choose contrast functions that are robust and/or of minimum variance. Finally, we introduce simple fixed-point algorithms for practical optimization of the contrast functions. These algorithms optimize the contrast functions very quickly and reliably.
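
The one-unit fixed-point update at the core of the paper can be sketched in a few lines of numpy. The sketch below assumes whitened, zero-mean data and uses the tanh nonlinearity (one of the contrast-function choices analyzed in the paper); the deflation or symmetric decorrelation needed to estimate several components is omitted for brevity:

```python
import numpy as np

def fastica_one_unit(z, n_iter=200, tol=1e-8, seed=0):
    """Estimate one independent component from whitened data z (d x n)
    with the fixed-point update
        w+ = E[z g(w^T z)] - E[g'(w^T z)] w,   w = w+ / ||w+||,
    using g = tanh as the contrast nonlinearity."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ z                                  # current projection
        g = np.tanh(y)
        g_prime = 1.0 - g ** 2                     # derivative of tanh
        w_new = (z * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:        # converged up to sign
            return w_new
        w = w_new
    return w
```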


Neural Computation | 1997

A fast fixed-point algorithm for independent component analysis

Aapo Hyvärinen; Erkki Oja

We introduce a novel fast algorithm for independent component analysis, which can be used for blind source separation and feature extraction. We show how a neural network learning rule can be transformed into a fixed-point iteration, which provides an algorithm that is very simple, does not depend on any user-defined parameters, and converges fast to the most accurate solution allowed by the data. The algorithm finds, one at a time, all nongaussian independent components, regardless of their probability distributions. The computations can be performed either in batch mode or in a semiadaptive manner. The convergence of the algorithm is rigorously proved, and the convergence speed is shown to be cubic. Some comparisons to gradient-based algorithms are made, showing that the new algorithm is usually 10 to 100 times faster, sometimes giving the solution in just a few iterations.
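
For whitened data, the kurtosis-based form of the iteration is particularly compact. A minimal sketch (single component, zero-mean whitened input assumed):

```python
import numpy as np

def fixed_point_kurtosis(z, n_iter=20, seed=0):
    """Kurtosis-based fixed-point iteration on whitened data z (d x n):
        w <- E[z (w^T z)^3] - 3 w,  followed by normalization.
    Convergence is cubic, so a handful of iterations usually suffices."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ z
        w = (z * y ** 3).mean(axis=1) - 3.0 * w
        w /= np.linalg.norm(w)
    return w
```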


NeuroImage | 2004

Validating the independent components of neuroimaging time series via clustering and visualization

Johan Himberg; Aapo Hyvärinen; Fabrizio Esposito

Recently, independent component analysis (ICA) has been widely used in the analysis of brain imaging data. An important problem with most ICA algorithms is, however, that they are stochastic; that is, their results may be somewhat different in different runs of the algorithm. Thus, the outputs of a single run of an ICA algorithm should be interpreted with some reserve, and further analysis of the algorithmic reliability of the components is needed. Moreover, as with any statistical method, the results are affected by the random sampling of the data, and some analysis of the statistical significance or reliability should be done as well. Here we present a method for assessing both the algorithmic and statistical reliability of estimated independent components. The method is based on running the ICA algorithm many times with slightly different conditions and visualizing the clustering structure of the obtained components in the signal space. In experiments with magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) data, the method was able to show that expected components are reliable; furthermore, it pointed out components whose interpretation was not obvious but whose reliability should incite the experimenter to investigate the underlying technical or physical phenomena. The method is implemented in a software package called Icasso.
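
The core of the procedure can be approximated with off-the-shelf tools. The sketch below (using scikit-learn's FastICA and SciPy's hierarchical clustering; Icasso itself additionally bootstraps the data and provides dedicated visualizations, omitted here) runs ICA with different random seeds and clusters the estimated components by absolute correlation, so that tight clusters indicate reliable components:

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def icasso_like(X, n_components=5, n_runs=15):
    """Run FastICA n_runs times on data X (n_samples x n_features)
    with different random seeds and cluster all estimated components."""
    estimates = []
    for seed in range(n_runs):
        S = FastICA(n_components=n_components,
                    random_state=seed).fit_transform(X)
        estimates.append(S.T)                  # components as rows
    S_all = np.vstack(estimates)               # (n_runs*k, n_samples)
    sim = np.abs(np.corrcoef(S_all))           # similarity of components
    dist = squareform(1.0 - sim, checks=False) # condensed distances
    labels = fcluster(linkage(dist, method="average"),
                      t=n_components, criterion="maxclust")
    return labels, sim                         # tight clusters = reliable
```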


International Journal of Neural Systems | 2000

A fast fixed-point algorithm for independent component analysis of complex valued signals

Ella Bingham; Aapo Hyvärinen

Separation of complex valued signals is a frequently arising problem in signal processing. For example, separation of convolutively mixed source signals involves computations on complex valued signals. In this article, it is assumed that the original, complex valued source signals are mutually statistically independent, and the problem is solved by the independent component analysis (ICA) model. ICA is a statistical method for transforming an observed multidimensional random vector into components that are mutually as independent as possible. In this article, a fast fixed-point type algorithm that is capable of separating complex valued, linearly mixed source signals is presented and its computational efficiency is shown by simulations. Also, the local consistency of the estimator given by the algorithm is proved.
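
For reference, the one-unit fixed-point update for complex valued, whitened data takes the form w+ = E[z (wᴴz)* g(|wᴴz|²)] − E[g(|wᴴz|²) + |wᴴz|² g′(|wᴴz|²)] w. A minimal numpy sketch, using the smooth contrast G(u) = √(ε + u) (the choice of contrast and of ε here is illustrative):

```python
import numpy as np

def complex_fastica_one_unit(z, n_iter=100, eps=0.1, seed=0):
    """One-unit fixed-point iteration for complex, whitened data z (d x n)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=z.shape[0]) + 1j * rng.normal(size=z.shape[0])
    w /= np.linalg.norm(w)
    g  = lambda u: 1.0 / (2.0 * np.sqrt(eps + u))        # G'(u)
    dg = lambda u: -1.0 / (4.0 * (eps + u) ** 1.5)       # G''(u)
    for _ in range(n_iter):
        y = np.conj(w) @ z                               # w^H z
        u = np.abs(y) ** 2
        w_new = (z * np.conj(y) * g(u)).mean(axis=1) \
                - (g(u) + u * dg(u)).mean() * w
        w = w_new / np.linalg.norm(w_new)
    return w
```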


International Conference on Artificial Neural Networks | 2009

Learning Features by Contrasting Natural Images with Noise

Michael U. Gutmann; Aapo Hyvärinen

Modeling the statistical structure of natural images is interesting for reasons related to neuroscience as well as engineering. Currently, this modeling relies heavily on generative probabilistic models. The estimation of such models is, however, difficult, especially when they consist of multiple layers. If the goal lies only in estimating the features, i.e. in pinpointing structure in natural images, one could also estimate instead a discriminative probabilistic model where multiple layers are more easily handled. For that purpose, we propose to estimate a classifier that can tell natural images apart from reference data which has been constructed to contain some known structure of natural images. The features of the classifier then reveal the interesting structure. Here, we use a classifier with one layer of features and reference data which contains the covariance-structure of natural images. We show that the features of the classifier are similar to those which are obtained from generative probabilistic models. Furthermore, we investigate the optimal shape of the nonlinearity that is used within the classifier.
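
The idea can be sketched with a simple discriminative model. Below, natural image patches are contrasted against Gaussian reference data with matched mean and covariance, and the classifier weights are read out as features. Note that this sketch uses plain logistic regression, i.e. a single feature; the paper's classifier has a whole layer of features with a nonlinearity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def contrastive_features(patches, seed=0):
    """Learn a feature by discriminating natural image patches
    (n x d array of vectorized patches) from Gaussian reference data
    with the same mean and covariance. Whatever the classifier learns
    must reflect structure *beyond* second-order statistics."""
    rng = np.random.default_rng(seed)
    n, d = patches.shape
    mu = patches.mean(axis=0)
    cov = np.cov(patches, rowvar=False)
    noise = rng.multivariate_normal(mu, cov, size=n)  # reference data

    X = np.vstack([patches, noise])
    y = np.concatenate([np.ones(n), np.zeros(n)])     # natural vs noise
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.coef_.ravel()                          # learned feature
```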


Neural Computation | 2000

Emergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces

Aapo Hyvärinen; Patrik O. Hoyer

Olshausen and Field (1996) applied the principle of independence maximization by sparse coding to extract features from natural images. This leads to the emergence of oriented linear filters that have simultaneous localization in space and in frequency, thus resembling Gabor functions and simple cell receptive fields. In this article, we show that the same principle of independence maximization can explain the emergence of phase- and shift-invariant features, similar to those found in complex cells. This new kind of emergence is obtained by maximizing the independence between norms of projections on linear subspaces (instead of the independence of simple linear filter outputs). The norms of the projections on such independent feature subspaces then indicate the values of invariant features.
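
Concretely, once the linear filters have been grouped into subspaces, the invariant features are simply the norms of the projections onto each subspace. A minimal sketch (grouping consecutive filters into fixed-size subspaces is an assumption made here for illustration):

```python
import numpy as np

def subspace_energies(W, x, subspace_size=4):
    """Invariant features as norms of projections onto feature subspaces.

    W: (k, d) learned linear filters, with k divisible by subspace_size;
       consecutive rows are assumed to span one subspace.
    x: (d,) image patch.
    Returns one phase- and shift-tolerant feature per subspace."""
    y = W @ x                                  # linear filter outputs
    groups = y.reshape(-1, subspace_size)      # one row per subspace
    return np.sqrt((groups ** 2).sum(axis=1))  # norm of each projection
```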


Neural Computation | 2001

Topographic Independent Component Analysis

Aapo Hyvärinen; Patrik O. Hoyer; Mika Inki

In ordinary independent component analysis, the components are assumed to be completely independent, and they do not necessarily have any meaningful order relationships. In practice, however, the estimated components are often not at all independent. We propose that this residual dependence structure could be used to define a topographic order for the components. In particular, a distance between two components could be defined using their higher-order correlations, and this distance could be used to create a topographic representation. Thus, we obtain a linear decomposition into approximately independent components, where the dependence of two components is approximated by the proximity of the components in the topographic representation.
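
One simple choice of higher-order correlation for defining such distances is the correlation of component energies: components can be linearly uncorrelated yet have strongly correlated squares, and that residual dependence is what the topography encodes. A sketch:

```python
import numpy as np

def energy_correlations(S):
    """Residual dependence between estimated components S (k x n),
    assumed zero-mean and unit variance, measured as correlations of
    their energies (squares). Large off-diagonal values suggest the
    two components should be placed close on the topographic map."""
    return np.corrcoef(S ** 2)                # (k, k) matrix
```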


Neural Computation | 1999

Sparse code shrinkage: denoising of nongaussian data by maximum likelihood estimation

Aapo Hyvärinen

Sparse coding is a method for finding a representation of data in which each of the components of the representation is only rarely significantly active. Such a representation is closely related to redundancy reduction and independent component analysis, and has some neurophysiological plausibility. In this article, we show how sparse coding can be used for denoising. Using maximum likelihood estimation of nongaussian variables corrupted by gaussian noise, we show how to apply a soft-thresholding (shrinkage) operator on the components of sparse coding so as to reduce noise. Our method is closely related to the method of wavelet shrinkage, but it has the important benefit over wavelet methods that the representation is determined solely by the statistical properties of the data. The wavelet representation, on the other hand, relies heavily on certain mathematical properties (like self-similarity) that may be only weakly related to the properties of natural data.
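
For a concrete instance: with a unit-variance Laplacian model of the sparse components and additive Gaussian noise of standard deviation σ, the shrinkage operator reduces to soft thresholding at √2·σ². (The paper derives shrinkage nonlinearities for a broader family of sparse densities; the Laplacian case is shown here because it is the simplest.)

```python
import numpy as np

def sparse_code_shrink(y, sigma):
    """Soft-thresholding shrinkage for a unit-variance Laplacian prior
    and additive Gaussian noise of standard deviation sigma:
        s_hat = sign(y) * max(|y| - sqrt(2) * sigma**2, 0).
    In sparse code shrinkage this is applied componentwise in the
    learned sparse-coding basis, then the data are transformed back."""
    t = np.sqrt(2.0) * sigma ** 2
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
```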


Neural Networks | 1999

Nonlinear independent component analysis: existence and uniqueness results

Aapo Hyvärinen; Petteri Pajunen

The question of existence and uniqueness of solutions for nonlinear independent component analysis is addressed. It is shown that if the space of mixing functions is not limited, there always exists an infinity of solutions. In particular, it is shown how to construct parameterized families of solutions. The indeterminacies involved are nontrivial, unlike in the linear case. Next, it is shown how to utilize some results of complex analysis to obtain uniqueness of solutions. We show that for two dimensions, the solution is unique up to a rotation if the mixing function is constrained to be a conformal mapping, together with some other assumptions. We also conjecture that the solution is strictly unique except in some degenerate cases, as resolving the indeterminacy implied by the rotation is essentially equivalent to estimating a linear ICA model.
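
The flavor of the non-uniqueness construction can be seen from a classical device based on conditional cumulative distribution functions (the notation below is ours):

```latex
% For ANY random vector (y_1, y_2), define
\[
  z_1 = F_{y_1}(y_1), \qquad
  z_2 = F_{y_2 \mid y_1}(y_2 \mid y_1),
\]
% where F denotes the marginal and conditional cumulative distribution
% functions. Then z_1 and z_2 are independent and uniform on [0, 1],
% regardless of how (y_1, y_2) was generated. Applied to an arbitrary
% nonlinear mixture of the sources, this yields "independent
% components" unrelated to the true sources, which is why
% unconstrained nonlinear ICA admits infinitely many solutions.
```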

Collaboration


Dive into Aapo Hyvärinen's collaborations.

Top Co-Authors

Patrik O. Hoyer
Helsinki University of Technology

Jarmo Hurri
Helsinki University of Technology

Erkki Oja
Helsinki University of Technology

Kun Zhang
Carnegie Mellon University

Hiroaki Sasaki
Nara Institute of Science and Technology