Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Martin Rehn is active.

Publications


Featured research published by Martin Rehn.


Journal of Computational Neuroscience | 2007

A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields

Martin Rehn; Friedrich T. Sommer

Computational models of primary visual cortex have demonstrated that principles of efficient coding and neuronal sparseness can explain the emergence of neurones with localised oriented receptive fields. Yet, existing models have failed to predict the diverse shapes of receptive fields that occur in nature. The existing models used a particular “soft” form of sparseness that limits average neuronal activity. Here we study models of efficient coding in a broader context by comparing soft and “hard” forms of neuronal sparseness. As a result of our analyses, we propose a novel network model for visual cortex. The model forms efficient visual representations in which the number of active neurones, rather than mean neuronal activity, is limited. This form of hard sparseness also economises cortical resources like synaptic memory and metabolic energy. Furthermore, our model accurately predicts the distribution of receptive field shapes found in the primary visual cortex of cat and monkey.
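
The soft/hard distinction can be made concrete in a few lines. Below is a minimal numpy sketch (not the paper's algorithm; the dictionary and image patch are random stand-ins) contrasting a hard-sparse code, which caps the number of active neurones, with a soft L1-style shrinkage that only limits average activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a random dictionary of 64 "receptive fields" and a random
# image patch; the paper learns these from natural images.
n_pixels, n_neurons = 256, 64
D = rng.standard_normal((n_pixels, n_neurons))
D /= np.linalg.norm(D, axis=0)           # unit-norm basis functions
patch = rng.standard_normal(n_pixels)

def hard_sparse_code(x, D, k=8):
    """Keep only the k most responsive neurons active (hard sparseness):
    the *number* of active units is limited, not their mean activity."""
    a = D.T @ x                          # linear responses of all neurons
    idx = np.argsort(np.abs(a))[:-k]     # indices of all but the k largest
    a[idx] = 0.0                         # silence every other neuron
    return a

def soft_sparse_code(x, D, lam=0.5):
    """Soft sparseness for contrast: shrink all responses toward zero
    (an L1-style penalty), limiting *average* activity instead."""
    a = D.T @ x
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

a_hard = hard_sparse_code(patch, D)
a_soft = soft_sparse_code(patch, D)
print("active neurons (hard):", np.count_nonzero(a_hard))  # exactly 8
print("active neurons (soft):", np.count_nonzero(a_soft))  # varies
```

The hard code guarantees a fixed count of active units per stimulus, which is what lets it bound resources like synaptic memory and metabolic energy; the soft code only pushes activity down on average.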


IBM Journal of Research and Development | 2008

Brain-scale simulation of the neocortex on the IBM Blue Gene/L supercomputer

Mikael Djurfeldt; Mikael Lundqvist; Christopher Johansson; Martin Rehn; Örjan Ekeberg; Anders Lansner

Biologically detailed large-scale models of the brain can now be simulated thanks to increasingly powerful massively parallel supercomputers. We present an overview, for the general technical reader, of a neuronal network model of layers II/III of the neocortex built with biophysical model neurons. These simulations, carried out on an IBM Blue Gene/L™ supercomputer, comprise up to 22 million neurons and 11 billion synapses, which makes them the largest simulations of this type ever performed. Such model sizes correspond to the cortex of a small mammal. The SPLIT library, used for these simulations, runs on single-processor as well as massively parallel machines. Performance measurements show good scaling behavior on the Blue Gene/L supercomputer up to 8,192 processors. Several key phenomena seen in the living brain appear as emergent phenomena in the simulations. We discuss the role of this kind of model in neuroscience and note that full-scale models may be necessary to preserve natural dynamics. We also discuss the need for software tools for the specification of models as well as for analysis and visualization of output data. Combining models that range from abstract connectionist type to biophysically detailed will help us unravel the basic principles underlying neocortical function.
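
For a sense of the per-processor workload, the figures quoted in the abstract imply the following back-of-the-envelope numbers (simple arithmetic only, no assumptions beyond the quoted totals).

```python
# Per-processor workload implied by the figures quoted above
# (22 million neurons, 11 billion synapses, up to 8,192 processors).
neurons  = 22_000_000
synapses = 11_000_000_000
procs    = 8_192

print(f"neurons  per processor: {neurons / procs:,.0f}")    # ~2,686
print(f"synapses per processor: {synapses / procs:,.0f}")   # ~1.34 million
print(f"synapses per neuron:    {synapses / neurons:,.0f}") # 500
```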


Network: Computation in Neural Systems | 2006

Attractor dynamics in a modular network model of neocortex

Mikael Lundqvist; Martin Rehn; Mikael Djurfeldt; Anders Lansner

Starting from the hypothesis that the mammalian neocortex to a first approximation functions as an associative memory of the attractor network type, we formulate a quantitative computational model of neocortical layers 2/3. The model employs biophysically detailed multi-compartmental model neurons with conductance-based synapses and includes pyramidal cells and two types of inhibitory interneurons, i.e., regular-spiking non-pyramidal cells and basket cells. The simulated network has a minicolumnar as well as a hypercolumnar modular structure and we propose that minicolumns rather than single cells are the basic computational units in neocortex. The minicolumns are represented in full scale and synaptic input to the different types of model neurons is carefully matched to reproduce experimentally measured values and to allow a quantitative reproduction of single cell recordings. Several key phenomena seen experimentally in vitro and in vivo appear as emergent features of this model. It exhibits robust and fast attractor dynamics with pattern completion and pattern rivalry and it suggests an explanation for the so-called attentional blink phenomenon. During assembly dynamics, the model faithfully reproduces several features of local UP states, as they have been experimentally observed in vitro, as well as oscillatory behavior similar to that observed in the neocortex.
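
Pattern completion in an attractor network can be illustrated with a drastically simplified stand-in, a plain binary Hopfield network rather than the biophysically detailed model described above. In the sketch below, a corrupted cue falls back into the stored attractor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Store a few random +/-1 patterns with the Hopfield outer-product rule.
n, n_patterns = 200, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

# Cue the network with a corrupted version of pattern 0 (30% of bits flipped).
state = patterns[0].copy()
flip = rng.choice(n, size=n * 30 // 100, replace=False)
state[flip] *= -1

# Synchronous updates; the state falls into the nearest stored attractor.
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = (state @ patterns[0]) / n
print(f"overlap with stored pattern after recall: {overlap:.2f}")  # ~1.0
```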


Multimedia Information Retrieval | 2008

Large-scale content-based audio retrieval from text queries

Gal Chechik; Eugene Ie; Martin Rehn; Samy Bengio; Dick Lyon

In content-based audio retrieval, the goal is to find sound recordings (audio documents) based on their acoustic features. This content-based approach differs from retrieval approaches that index media files using metadata such as file names and user tags. In this paper, we propose a machine learning approach for retrieving sounds that is novel in that it (1) uses free-form text queries rather than sound-sample-based queries, (2) searches by audio content rather than via textual metadata, and (3) can scale to a very large number of audio documents and a very rich query vocabulary. We handle generic sounds, including a wide variety of sound effects, animal vocalizations and natural scenes. We test a scalable approach based on a passive-aggressive model for image retrieval (PAMIR), and compare it to two state-of-the-art approaches: Gaussian mixture models (GMM) and support vector machines (SVM). We test our approach on two large real-world datasets: a collection of short sound effects, and a noisier and larger collection of user-contributed, user-labeled recordings (25K files, 2,000-term vocabulary). We find that all three methods achieve very good retrieval performance. For instance, a positive document is retrieved in the first position of the ranking more than half the time, and on average there are more than 4 positive documents in the first 10 retrieved, for both datasets. PAMIR completed both training and retrieval of all data in less than 6 hours for both datasets, on a single machine. It was one to three orders of magnitude faster than the competing approaches. This approach should therefore scale to much larger datasets in the future.
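
The core passive-aggressive ranking update is compact enough to sketch. The following toy numpy illustration shows a PA-I style update on (query, relevant document, irrelevant document) triplets; dimensions, data, and the hidden ground-truth map are invented stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dimensions; the paper maps sparse audio features to a large
# query-term vocabulary -- these sizes are illustrative only.
d_text, d_audio, C = 50, 100, 1.0
W = np.zeros((d_text, d_audio))

def pamir_step(W, q, pos, neg, C=1.0):
    """One passive-aggressive update on a (query, relevant, irrelevant)
    triplet: if the relevant document does not beat the irrelevant one
    by a margin of 1, move W just enough to close the gap."""
    loss = max(0.0, 1.0 - q @ W @ pos + q @ W @ neg)
    if loss > 0.0:
        V = np.outer(q, pos - neg)          # update direction
        tau = min(C, loss / (V * V).sum())  # PA-I step size
        W = W + tau * V
    return W

# Train on random triplets labeled by a hidden ground-truth map
# (a stand-in for labeled sound data).
M_true = rng.standard_normal((d_text, d_audio))
for _ in range(2000):
    q = rng.standard_normal(d_text)
    a, b = rng.standard_normal((2, d_audio))
    pos, neg = (a, b) if q @ M_true @ a > q @ M_true @ b else (b, a)
    W = pamir_step(W, q, pos, neg, C)

# Check: does the learned W rank the correct document first on fresh triplets?
correct = 0
for _ in range(500):
    q = rng.standard_normal(d_text)
    a, b = rng.standard_normal((2, d_audio))
    pos, neg = (a, b) if q @ M_true @ a > q @ M_true @ b else (b, a)
    correct += (q @ W @ pos) > (q @ W @ neg)
print(f"pairwise ranking accuracy: {correct / 500:.2f}")  # well above chance
```

Each update is "passive" when the margin is already satisfied and "aggressive" otherwise, which is what keeps training cheap enough to scale.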


Neural Computation | 2010

Sound retrieval and ranking using sparse auditory representations

Richard F. Lyon; Martin Rehn; Samy Bengio; Thomas C. Walters; Gal Chechik

To create systems that understand the sounds that humans are exposed to in everyday life, we need to represent sounds with features that can discriminate among many different sound classes. Here, we use a sound-ranking framework to quantitatively evaluate such representations in a large-scale task. We have adapted a machine-vision method, the passive-aggressive model for image retrieval (PAMIR), which efficiently learns a linear mapping from a very large sparse feature space to a large query-term space. Using this approach, we compare different auditory front ends and different ways of extracting sparse features from high-dimensional auditory images. We tested auditory models that use an adaptive pole-zero filter cascade (PZFC) auditory filter bank and sparse-code feature extraction from stabilized auditory images with multiple vector quantizers. In addition to auditory image models, we compare a family of more conventional mel-frequency cepstral coefficient (MFCC) front ends. The experimental results show a significant advantage for the auditory models over vector-quantized MFCCs. When thousands of sound files with a query vocabulary of thousands of words were ranked, the best precision at top-1 was 73% and the average precision was 35%, reflecting an 18% improvement over the best competing MFCC front end.
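
One common way to turn frame-level auditory features into the sparse document-level vectors such a ranker consumes is hard vector quantization against a codebook. The sketch below illustrates that general recipe with random stand-in data; the paper's actual pipeline uses PZFC stabilized auditory images and multiple vector quantizers.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins: 200 frames of a 64-dimensional "auditory image" and a codebook
# of 512 codewords (normally learned from data; random here to keep the
# sketch self-contained).
frames   = rng.standard_normal((200, 64))
codebook = rng.standard_normal((512, 64))

def sparse_code_histogram(frames, codebook):
    """Assign each frame to its nearest codeword and accumulate a
    document-level histogram: a high-dimensional but sparse feature
    vector, the kind of representation a linear ranker consumes."""
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                 # hard vector quantization
    hist = np.bincount(nearest, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                    # normalize over frames

h = sparse_code_histogram(frames, codebook)
print("nonzero bins:", np.count_nonzero(h), "of", h.size)
```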


Neurocomputing | 2006

Attractor neural networks with patchy connectivity

Christopher Johansson; Martin Rehn; Anders Lansner

The neurons in the mammalian visual cortex are arranged in columnar structures, and the synaptic contacts of the pyramidal neurons in layer II/III are clustered into patches that are sparsely distributed over the surrounding cortical surface. Here, we use an attractor neural-network model of the cortical circuitry and investigate the effects of patchy connectivity, both on the properties of the network and on the attractor dynamics. An analysis of the network shows that the signal-to-noise ratio of the synaptic potential sums is improved by the patchy connectivity, which results in a higher storage capacity. This analysis is performed for both the Hopfield and Willshaw learning rules, and the results are confirmed by simulation experiments.
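
The Willshaw rule mentioned here is simple enough to show directly. Below is a minimal sketch of storage and recall (fully connected, i.e., without the patchy connectivity mask the paper analyses): sparse binary patterns are stored by clipped Hebbian learning and one is recalled from a partial cue.

```python
import numpy as np

rng = np.random.default_rng(4)

n, k, n_patterns = 400, 10, 40   # neurons, active units per pattern, patterns

# Sparse binary patterns with exactly k active units each.
patterns = np.zeros((n_patterns, n), dtype=bool)
for p in patterns:
    p[rng.choice(n, size=k, replace=False)] = True

# Willshaw (clipped Hebbian) learning: a synapse is on if its two neurons
# were ever coactive in a stored pattern.
W = np.zeros((n, n), dtype=bool)
for p in patterns:
    W |= np.outer(p, p)

# Recall pattern 0 from a partial cue (half of its active units).
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: k // 2]] = False
sums = W.astype(int) @ cue.astype(int)   # dendritic sums
recalled = sums >= cue.sum()             # threshold: every cue input active
print("recall correct:", np.array_equal(recalled, patterns[0]))
```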


Neurocomputing | 2006

Storing and restoring visual input with collaborative rank coding and associative memory

Martin Rehn; Friedrich T. Sommer

Associative memory in cortical circuits has been held to be a major mechanism for content-addressable memory. Hebbian synapses implement associative memory efficiently when storing sparse binary activity patterns. However, in models of sensory processing, representations are graded rather than binary. Thus, it has remained an unresolved question how sensory computation could exploit cortical associative memory. Here we propose a way in which sensory processing could benefit from memory in cortical circuitry. We describe a new collaborative method of rank coding for converting graded stimuli, such as natural images, into sequences of synchronous spike volleys. Such sequences of sparse binary patterns can be efficiently processed in associative memory of the Willshaw type. We evaluate the storage capacity and noise tolerance of the proposed system and demonstrate its use in cleanup and fill-in for noisy or occluded visual input.
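
A rough sketch of the rank-coding idea (a guess at the flavour, not the paper's collaborative method): sort a graded stimulus by intensity and emit fixed-size synchronous volleys, which are exactly the sparse binary patterns a Willshaw memory stores well.

```python
import numpy as np

rng = np.random.default_rng(5)

def rank_code(x, volley_size=8):
    """Convert a graded stimulus into a sequence of synchronous spike
    volleys: units fire in order of decreasing intensity, batched into
    sparse binary patterns of a fixed size."""
    order = np.argsort(-x)                     # strongest inputs fire first
    volleys = []
    for start in range(0, len(order), volley_size):
        v = np.zeros(len(x), dtype=bool)
        v[order[start:start + volley_size]] = True
        volleys.append(v)
    return volleys

# A graded "image patch" becomes a handful of sparse binary patterns.
patch = rng.random(64)
volleys = rank_code(patch, volley_size=8)
print(len(volleys), "volleys,", volleys[0].sum(), "active units each")

# Graded values can be coarsely restored from the volley sequence:
# earlier volleys stand for higher intensities.
restored = np.zeros_like(patch)
for rank, v in enumerate(volleys):
    restored[v] = len(volleys) - rank          # rank-based amplitude
```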


Neurocomputing | 2004

Sequence memory with dynamical synapses

Martin Rehn; Anders Lansner

We present an attractor model of cortical memory, capable of sequence learning. The network incorporates a dynamical synapse model and is trained using a Hebbian learning rule that operates by redi ...


Neurocomputing | 2006

Attractor dynamics in a modular network model of the cerebral cortex

Mikael Lundqvist; Martin Rehn; Anders Lansner

Computational models of cortical associative memory often take a top-down approach. We have previously described such an abstract model with a hypercolumnar structure. Here we explore a similar, biophysically detailed but subsampled network model of neocortex. We study how the neurodynamics and associative memory properties of this biophysical model relate to the abstract model as well as to experimental data. The resulting network exhibits attractor dynamics: pattern completion and pattern rivalry. It reproduces several features of experimentally observed local UP states, as well as oscillatory behavior on the gamma and theta time scales observed in the cerebral cortex.


Neurocomputing | 2007

Tonically driven and self-sustaining activity in the lamprey hemicord: When can they co-exist?

Mikael Huss; Martin Rehn

In lamprey hemisegmental preparations, two types of rhythmic activity are found: slower, tonically driven activity that varies with the external drive, and faster, more stereotyped activity that arises after a transient electrical stimulus. We present a simple conceptual model in which a bistable excitable system can exhibit the two states. We then show that a neuronal network model can display the desired characteristics, given that synaptic dynamics (facilitation and saturation) are included. The model behaviour and its dependence on key parameters are illustrated. We discuss the relevance of our model to the lamprey locomotor system.
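
A one-population rate model is enough to caricature the proposed bistability; the paper's mechanism additionally needs synaptic facilitation and saturation, and the parameters below are invented. Under constant drive the unit sits in a low, drive-following state, and a brief pulse switches it into a self-sustained high state.

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))   # steep sigmoid gain

def simulate(drive, pulse_at=None, T=2000, dt=0.01, w=1.0, tau=0.1):
    """Euler-integrate a single-population rate model
        tau * dr/dt = -r + f(w * r + I(t)).
    With strong recurrent excitation it is bistable: a transient pulse
    can switch it from drive-following to self-sustained activity."""
    r, trace = 0.0, []
    for step in range(T):
        I = drive
        if pulse_at is not None and pulse_at <= step < pulse_at + 50:
            I += 1.0                  # brief stimulus, as in the experiments
        r += dt / tau * (-r + f(w * r + I))
        trace.append(r)
    return np.array(trace)

low  = simulate(drive=0.1)                  # settles in the low state
high = simulate(drive=0.1, pulse_at=500)    # pulse kicks it to the high state
print(f"final rate without pulse: {low[-1]:.2f}")   # low, drive-following
print(f"final rate with pulse:    {high[-1]:.2f}")  # high, self-sustained
```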

Collaboration


Dive into Martin Rehn's collaborations.

Top Co-Authors

Anders Lansner
Royal Institute of Technology

Mikael Lundqvist
Royal Institute of Technology

Christopher Johansson
Royal Institute of Technology

Mikael Djurfeldt
Royal Institute of Technology

Örjan Ekeberg
Royal Institute of Technology