Publication


Featured research published by Teuvo Kohonen.


Neurocomputing: Foundations of Research | 1988

Self-organized formation of topologically correct feature maps

Teuvo Kohonen

This work contains a theoretical study and computer simulations of a new self-organizing process. The principal discovery is that in a simple network of adaptive physical elements which receives signals from a primary event space, the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events. In other words, a principle has been discovered which facilitates the automatic formation of topologically correct maps of features of observable events. The basic self-organizing system is a one- or two-dimensional array of processing units resembling a network of threshold-logic units, and characterized by short-range lateral feedback between neighbouring units. Several types of computer simulations are used to demonstrate the ordering process as well as the conditions under which it fails.
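For readers who want to see the ordering process in concrete form, the following is a minimal sketch of a SOM-style update loop in the spirit of the paper, not its original simulation code; the grid size, learning-rate schedule, and Gaussian neighbourhood are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), n_iter=2000, seed=0):
    """Toy SOM: a 2-D array of units whose weight vectors self-organize
    so that neighbouring units come to respond to similar inputs."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))          # random initial state
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    for t in range(n_iter):
        x = data[rng.integers(len(data))]            # random input event
        # best-matching unit: the unit whose weight vector is closest to x
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), (rows, cols))
        # shrinking neighbourhood and learning rate (short-range lateral coupling)
        sigma = 3.0 * (1.0 - t / n_iter) + 0.5
        alpha = 0.5 * (1.0 - t / n_iter) + 0.01
        grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))   # neighbourhood kernel around the BMU
        weights += alpha * h[..., None] * (x - weights)
    return weights

# usage: map 3-D colour vectors onto a 2-D grid; nearby units end up with similar colours
som = train_som(np.random.default_rng(1).random((500, 3)))
```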


Neural Networks | 1988

An introduction to neural computing

Teuvo Kohonen

This article contains a brief survey of the motivations, fundamentals, and applications of artificial neural networks, as well as some detailed analytical expressions for their theory.


IEEE Transactions on Neural Networks | 2000

Self organization of a massive document collection

Teuvo Kohonen; Samuel Kaski; Krista Lagus; Jarkko Salojärvi; Jukka Honkela; Antti Saarela

This article describes the implementation of a system that is able to organize vast document collections according to textual similarities. It is based on the self-organizing map (SOM) algorithm. Statistical representations of the documents' vocabularies are used as the feature vectors. The main goal of our work has been to scale up the SOM algorithm to deal with large amounts of high-dimensional data. In a practical experiment we mapped 6,840,568 patent abstracts onto a 1,002,240-node SOM. As the feature vectors we used 500-dimensional vectors of stochastic figures obtained as random projections of weighted word histograms.
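The random-projection step mentioned above can be sketched as follows; only the 500-dimensional target size comes from the abstract, while the tokenization and the raw-count weighting are simplified assumptions.

```python
import numpy as np

def random_projection_features(docs, target_dim=500, seed=0):
    """Encode each document as a word histogram, then project it with a fixed
    random matrix to a short feature vector (random mapping)."""
    rng = np.random.default_rng(seed)
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}

    R = rng.normal(size=(len(vocab), target_dim))    # fixed random projection matrix

    feats = []
    for d in docs:
        hist = np.zeros(len(vocab))
        for w in d.lower().split():
            hist[index[w]] += 1.0                    # raw counts; the paper used weighted histograms
        feats.append(hist @ R)                       # project down to target_dim dimensions
    return np.array(feats)

# usage: two tiny "abstracts" mapped to 500-dimensional feature vectors
X = random_projection_features(["self organizing map of documents",
                                "document map built with a self organizing map"])
print(X.shape)   # (2, 500)
```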


Neurocomputing | 1998

The self-organizing map

Teuvo Kohonen

An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.


IEEE Transactions on Computers | 1972

Correlation Matrix Memories

Teuvo Kohonen

A new model for associative memory, based on a correlation matrix, is suggested. In this model information is accumulated on memory elements as products of component data. Denoting a key vector by q(p), and the data associated with it by another vector x(p), the pairs (q(p), x(p)) are memorized in the form of a matrix M_xq = c Σ_p x(p) q(p)^T, where c is a constant. A randomly selected subset of the elements of M_xq can also be used for memorizing. The recall of a particular datum x(r) is made by the transformation x(r) = M_xq q(r). This model is failure tolerant and facilitates associative search of information; these are properties that are usually assigned to holographic memories. Two classes of memories are discussed: the complete correlation matrix memory (CCMM) and randomly organized incomplete correlation matrix memories (ICMM). The data recalled from the latter are stochastic variables, but the fidelity of recall is shown to have a deterministic limit if the number of memory elements grows without limit. A special case of correlation matrix memories is the auto-associative memory, in which any part of the memorized information can be used as a key. The memories are selective with respect to accumulated data. The ICMM exhibits adaptive improvement under certain circumstances. It is also suggested that correlation matrix memories could be applied to the classification of data.
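The memorization and recall rules translate directly into a few lines of linear algebra; the sketch below uses long random key vectors so that they are nearly orthogonal, and sets c = 1, both of which are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# key vectors q(p) and their associated data vectors x(p)
keys = [v / np.linalg.norm(v) for v in rng.normal(size=(3, 1024))]  # long random keys are nearly orthogonal
data = list(rng.normal(size=(3, 8)))

# memorization: M_xq = c * sum_p x(p) q(p)^T, with c = 1 here
M = sum(np.outer(x, q) for x, q in zip(data, keys))

# recall of a particular datum: x(r) = M_xq q(r)
recalled = M @ keys[1]
print(np.abs(recalled - data[1]).max())  # small crosstalk; it vanishes as the keys become exactly orthogonal
```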


Proceedings of the IEEE | 1996

Engineering applications of the self-organizing map

Teuvo Kohonen; Erkki Oja; Olli Simula; Ari Visa; Jari Kangas

The self-organizing map (SOM) method is a new, powerful software tool for the visualization of high-dimensional data. It converts complex, nonlinear statistical relationships between high-dimensional data into simple geometric relationships on a low-dimensional display. As it thereby compresses information while preserving the most important topological and metric relationships of the primary data elements on the display, it may also be thought to produce some kind of abstractions. The term self-organizing map signifies a class of mappings defined by error-theoretic considerations. In practice they result in certain unsupervised, competitive learning processes, computed by simple-looking SOM algorithms. Many industries have found SOM-based software tools useful. The most important property of the SOM, the orderliness of the input-output mapping, can be utilized for many tasks: reduction of the amount of training data, speeding up learning, nonlinear interpolation and extrapolation, generalization, and effective compression of information for its transmission.
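The low-dimensional display described here amounts to placing each data item at the grid coordinates of its best-matching unit; a minimal sketch of that projection step follows (the toy map and data are random placeholders, not an industrial application).

```python
import numpy as np

def project_to_display(weights, data):
    """Map each high-dimensional sample to the 2-D grid position of its
    best-matching unit; nearby positions then correspond to similar samples."""
    rows, cols, _ = weights.shape
    coords = []
    for x in data:
        dists = np.linalg.norm(weights - x, axis=-1)
        coords.append(np.unravel_index(np.argmin(dists), (rows, cols)))
    return np.array(coords)

# usage with a random stand-in for a trained SOM codebook
rng = np.random.default_rng(0)
weights = rng.random((10, 10, 3))              # 10 x 10 grid of 3-D model vectors
samples = rng.random((5, 3))
print(project_to_display(weights, samples))    # five (row, col) display coordinates
```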


IEEE Computer | 1988

The 'neural' phonetic typewriter

Teuvo Kohonen

The factors that make speech recognition difficult are examined, and the potential of neural computers for this purpose is discussed. A speaker-adaptive system that transcribes dictation using an unlimited vocabulary is presented; it is based on a neural network processor for the recognition of phonetic units of speech. The acoustic preprocessing, vector quantization, neural network model, and shortcut learning algorithm used are described. The utilization of phonotopic maps and of postprocessing in symbolic form is discussed. Hardware implementations and performance of the neural networks are considered.


Biological Cybernetics | 1989

Self-organizing semantic maps

Helge Ritter; Teuvo Kohonen

Self-organized formation of topographic maps for abstract data, such as words, is demonstrated in this work. The semantic relationships in the data are reflected by their relative distances in the map. Two different simulations, both based on a neural network model that implements the algorithm of the self-organizing feature maps, are given. For both, an essential new ingredient is the inclusion of the contexts in which each symbol appears into the input data. This enables the network to detect the “logical similarity” between words from the statistics of their contexts. In the first demonstration, the context simply consists of a set of attribute values that occur in conjunction with the words. In the second demonstration, the context is defined by the sequences in which the words occur, without consideration of any associated attributes. Simple verbal statements consisting of nouns, verbs, and adverbs have been analyzed in this way. Such phrases or clauses involve some of the abstractions that appear in thinking, namely the most common categories, into which the words are then automatically grouped in both of our simulations. We also argue that a similar process may be at work in the brain.
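A rough sketch of the second kind of context encoding, each word represented by a random code concatenated with the averaged codes of its neighbouring words, is given below; the code dimension, the window of one word on each side, and the 0.2 scaling of the symbol part are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def word_context_vectors(sentences, code_dim=20, seed=0):
    """For each word, concatenate its own random code with the averaged codes
    of the words occurring next to it; words used in similar contexts end up
    with similar vectors, which a SOM can then group into a semantic map."""
    rng = np.random.default_rng(seed)
    vocab = sorted({w for s in sentences for w in s.split()})
    code = {w: rng.normal(size=code_dim) for w in vocab}   # random symbol codes

    ctx_sum = {w: np.zeros(code_dim) for w in vocab}
    ctx_n = {w: 0 for w in vocab}
    for s in sentences:
        words = s.split()
        for i, w in enumerate(words):
            for j in (i - 1, i + 1):                       # immediate left/right context
                if 0 <= j < len(words):
                    ctx_sum[w] += code[words[j]]
                    ctx_n[w] += 1

    return {w: np.concatenate([0.2 * code[w],              # symbol part, scaled down
                               ctx_sum[w] / max(ctx_n[w], 1)])
            for w in vocab}

# usage: "dog" and "cat" get similar vectors because they share contexts
vecs = word_context_vectors(["the dog runs fast", "the cat runs fast", "the dog sleeps"])
```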


Neurocomputing | 1998

WEBSOM – Self-organizing maps of document collections

Samuel Kaski; Timo Honkela; Krista Lagus; Teuvo Kohonen

With the WEBSOM method a textual document collection may be organized onto a graphical map display that provides an overview of the collection and facilitates interactive browsing. Interesting documents can be located on the map using a content-directed search. Each document is encoded as a histogram of word categories, which are formed by the self-organizing map (SOM) algorithm based on the similarities in the contexts of the words. The encoded documents are organized on another self-organizing map, a document map, on which nearby locations contain similar documents. Special consideration is given to the computation of very large document maps, which is possible with general-purpose computers if the dimensionality of the word category histograms is first reduced with a random mapping method and if computationally efficient algorithms are used in computing the SOMs.
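The word-category histogram encoding can be sketched as follows; here the word-to-category mapping that a word-level SOM would produce is simply assumed to be given, and the subsequent random-mapping reduction is omitted.

```python
import numpy as np

def category_histograms(docs, word_category, n_categories):
    """Encode each document as a histogram over word categories (in WEBSOM the
    categories come from a word-level SOM; here the mapping is assumed given)."""
    feats = np.zeros((len(docs), n_categories))
    for i, d in enumerate(docs):
        for w in d.lower().split():
            if w in word_category:
                feats[i, word_category[w]] += 1.0
    # normalise so that long and short documents are comparable
    feats /= np.maximum(feats.sum(axis=1, keepdims=True), 1.0)
    return feats

# usage: a hand-made two-category mapping, just to show the encoding
cats = {"neural": 0, "network": 0, "map": 0, "patent": 1, "abstract": 1}
print(category_histograms(["neural network map", "patent abstract abstract"], cats, 2))
```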


International Symposium on Neural Networks | 1990

Improved versions of learning vector quantization

Teuvo Kohonen

The author introduces a variant of (supervised) learning vector quantization (LVQ) and discusses practical problems associated with the application of the algorithms. The LVQ algorithms work explicitly in the input domain of the primary observation vectors, and their purpose is to approximate the theoretical Bayes decision borders using piecewise linear decision surfaces. This is done by purportedly optimal placement of the class codebook vectors in signal space. As the classification decision is based on the nearest-neighbor selection among the codebook vectors, its computation is very fast. It has turned out that the differences between the presented algorithms with regard to the remaining discretization error are not significant, and thus the choice of algorithm may be based on secondary arguments, such as stability in learning, in which respect the variant introduced (LVQ2.1) seems to be superior to the others. A comparative study of several methods applied to speech recognition is included.
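As an illustration of the codebook idea, here is a minimal LVQ1-style sketch (the simpler relative of the LVQ2.1 variant discussed above); the learning rate, epoch count, and codebook initialization are arbitrary assumptions.

```python
import numpy as np

def lvq1(X, y, codebook, labels, alpha=0.05, epochs=20, seed=0):
    """Basic LVQ: move the nearest codebook vector toward a correctly
    classified sample and away from an incorrectly classified one."""
    rng = np.random.default_rng(seed)
    cb = codebook.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            k = np.argmin(np.linalg.norm(cb - X[i], axis=1))   # nearest codebook vector
            step = alpha if labels[k] == y[i] else -alpha
            cb[k] += step * (X[i] - cb[k])
    return cb

def classify(x, codebook, labels):
    """Nearest-neighbour decision among the codebook vectors (fast at run time)."""
    return labels[np.argmin(np.linalg.norm(codebook - x, axis=1))]

# usage on two toy Gaussian classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
cb = lvq1(X, y, codebook=X[[0, 1, 50, 51]].astype(float), labels=np.array([0, 0, 1, 1]))
print(classify(np.array([3.0, 3.0]), cb, np.array([0, 0, 1, 1])))   # expected to print 1 for this well-separated data
```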

Collaboration


Dive into Teuvo Kohonen's collaboration.

Top Co-Authors

Kari Torkkola, Helsinki University of Technology
Jari Kangas, Helsinki University of Technology
Olli Simula, Helsinki University of Technology
Panu Somervuo, Helsinki University of Technology
Erkki Reuhkala, Helsinki University of Technology
Kimmo Raivio, Helsinki University of Technology