Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Peter Földiák is active.

Publication


Featured research published by Peter Földiák.


Neural Computation | 1991

Learning invariance from transformation sequences

Peter Földiák

The visual system can reliably identify objects even when the retinal image is transformed considerably by commonly occurring changes in the environment. A local learning rule is proposed, which allows a network to learn to generalize across such transformations. During the learning phase, the network is exposed to temporal sequences of patterns undergoing the transformation. An application of the algorithm is presented in which the network learns invariance to shift in retinal position. Such a principle may be involved in the development of the characteristic shift invariance property of complex cells in the primary visual cortex, and also in the development of more complicated invariance properties of neurons in higher visual areas.
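
A minimal sketch of the kind of trace-based Hebbian update described above, for a single unit watching one pattern shift across its inputs; the parameter values, the weight normalisation and the toy input are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, alpha, delta = 16, 0.02, 0.2   # learning rate and trace decay (made up)

w = rng.normal(size=n_inputs)
w /= np.linalg.norm(w)
y_trace = 0.0

# One pattern undergoing a transformation over time (here: shifts in position).
base = np.zeros(n_inputs)
base[:4] = 1.0
sequence = [np.roll(base, shift) for shift in range(8)]

for x in sequence:
    y = float(w @ x)                             # unit's current response
    y_trace = (1 - delta) * y_trace + delta * y  # slowly decaying activity trace
    w += alpha * y_trace * x                     # trace-modulated Hebbian update
    w /= np.linalg.norm(w)                       # keep the weight vector bounded
```

Because the trace decays slowly, inputs that follow one another in time strengthen the same weight vector, which is what lets the unit generalise across the transformation.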


Journal of Cognitive Neuroscience | 2001

The Speed of Sight

Christian Keysers; Dengke Xiao; Peter Földiák; David I. Perrett

Macaque monkeys were presented with continuous rapid serial visual presentation (RSVP) sequences of unrelated naturalistic images at rates of 14-222 msec/image, while neurons that responded selectively to complex patterns (e.g., faces) were recorded in temporal cortex. Stimulus selectivity was preserved for 65 of these neurons even at surprisingly fast presentation rates (14 msec/image or 72 images/sec). Five human subjects were asked to detect or remember images under equivalent conditions. Their performance in both tasks was above chance at all rates (14-111 msec/image). The performance of single neurons was comparable to that of the human subjects and varied with presentation rate in a similar way. The implications for the role of temporal cortex cells in perception are discussed.


Biological Cybernetics | 1990

Forming sparse representations by local anti-Hebbian learning

Peter Földiák

How does the brain form a useful representation of its environment? It is shown here that a layer of simple Hebbian units connected by modifiable anti-Hebbian feed-back connections can learn to code a set of patterns in such a way that statistical dependency between the elements of the representation is reduced, while information is preserved. The resulting code is sparse, which is favourable if it is to be used as input to a subsequent supervised associative layer. The operation of the network is demonstrated on two simple problems.
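
A minimal sketch, in the spirit of the network described above, of a layer of threshold units with Hebbian feedforward weights and anti-Hebbian lateral feedback; the learning rates, target activity level, single inhibition step and threshold update are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 8, 4
alpha, beta, gamma, p = 0.05, 0.05, 0.02, 0.25   # rates and target activity (made up)

Q = rng.random((n_out, n_in)) * 0.1              # feedforward Hebbian weights
W = np.zeros((n_out, n_out))                     # lateral anti-Hebbian feedback
t = np.full(n_out, 0.5)                          # adaptive thresholds

for _ in range(2000):
    x = (rng.random(n_in) < 0.3).astype(float)   # random binary input pattern
    y = (Q @ x - t > 0).astype(float)            # feedforward response
    y = (Q @ x + W @ y - t > 0).astype(float)    # one step of lateral inhibition
    W -= beta * (np.outer(y, y) - p * p)         # anti-Hebbian: co-activity adds inhibition
    np.fill_diagonal(W, 0.0)
    W = np.minimum(W, 0.0)                       # feedback connections stay inhibitory
    Q += alpha * y[:, None] * (x[None, :] - Q)   # Hebbian rule with weight decay
    t += gamma * (y - p)                         # thresholds track the target sparseness
```

The anti-Hebbian feedback penalises units for being active together, which is what reduces the statistical dependency between the elements of the code while the Hebbian feedforward weights preserve the information.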


Archive | 1993

The ‘Ideal Homunculus’: Statistical Inference from Neural Population Responses

Peter Földiák

What does the response of a neuron, or of a group of neurons mean? What does it say about the stimulus? How distributed and efficient is the encoding of information in population responses? It is suggested here that Bayesian statistical inference can help answer these questions by allowing us to ‘read the neural code’ not only in the time domain [2, 5] but also across a population of neurons. Based on repeated recordings of neural responses to a known set of stimuli, we can estimate the conditional probability distribution of the responses given the stimulus, P(response|stimulus). The behaviourally relevant distribution, i.e. the conditional probability distribution of the stimuli given an observed response from a cell or a group of cells, P(stimulus|response), can be derived using Bayes' rule. This distribution contains all the information present in the response about the stimulus, and gives an upper limit and a useful comparison to the performance of further neural processing stages receiving input from these neurons. As the notion of an ‘ideal observer’ makes the definition of psychophysical efficiency possible [1], this ‘ideal homunculus’ (looking at the neural response instead of the stimulus) can be used to test the efficiency of neural representation. Bayes' rule is: P(s|r) = P(r|s)P(s)/P(r) = P(r|s)P(s) / Σ_{s′∈S} P(r|s′)P(s′), where s stands for stimulus, r for response, and S is the set of possible stimuli.
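
The read-out itself is a direct application of Bayes' rule; a small numerical sketch, with an invented conditional-response table standing in for the P(response|stimulus) estimates obtained from repeated recordings:

```python
import numpy as np

# Hypothetical numbers: 3 stimuli, responses discretised into 4 bins.
# p_r_given_s[i, j] = estimated P(response = j | stimulus = i).
p_r_given_s = np.array([[0.70, 0.20, 0.05, 0.05],
                        [0.10, 0.60, 0.20, 0.10],
                        [0.05, 0.15, 0.30, 0.50]])
p_s = np.array([1/3, 1/3, 1/3])                  # prior over the stimulus set S

def posterior(response_bin):
    """P(stimulus | response) by Bayes' rule, as in the formula above."""
    joint = p_r_given_s[:, response_bin] * p_s   # P(r|s) P(s) for every s in S
    return joint / joint.sum()                   # the denominator is P(r)

print(posterior(2))   # what the 'ideal homunculus' infers from an observed response
```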


Cognitive Neuropsychology | 2005

Out of sight but not out of mind: the neurophysiology of iconic memory in the superior temporal sulcus

Christian Keysers; Dengke Xiao; Peter Földiák; David I. Perrett

Iconic memory, the short-lasting visual memory of a briefly flashed stimulus, is an important component of most models of visual perception. Here we investigate what physiological mechanisms underlie this capacity by showing rapid serial visual presentation (RSVP) sequences with and without interstimulus gaps to human observers and macaque monkeys. For gaps of up to 93 ms between consecutive images, human observers and neurones in the temporal cortex of macaque monkeys were found to continue processing a stimulus as if it was still present on the screen. The continued firing of neurones in temporal cortex may therefore underlie iconic memory. Based on these findings, a neurophysiological vision of iconic memory is presented. The first two authors contributed equally.


Progress in Brain Research | 2004

Rapid serial visual presentation for the determination of neural selectivity in area STSa

Peter Földiák; Dengke Xiao; Christian Keysers; Robin Edwards; David I. Perrett

We show that rapid serial visual presentation (RSVP) in combination with a progressive reduction of the stimulus set is an efficient method for characterising the selectivity properties of high-level cortical neurons in single-cell electrophysiological recording experiments. Rapid presentation allows a significantly larger number of stimuli to be tested experimentally, which can reduce the subjectivity of the results due to stimulus selection and the lack of sufficient control stimuli. We demonstrate the reliability of the rapid presentation and stimulus-reduction methods through repeated experiments and the comparison of different testing conditions. Our results from neurons in area STSa of the macaque temporal cortex provide a well-controlled confirmation of the existence of a population of cells that respond selectively to stimuli containing faces. View-tuning properties measured with this method also confirmed earlier results. In addition, we found a population of cells that respond reliably to complex non-face stimuli, although their tuning properties are not obvious.
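
A schematic sketch of the progressive stimulus-set reduction idea: present a large candidate set in RSVP, keep the most effective fraction, and retest the survivors. The keep fraction, the stopping size and the simulated response values are invented for illustration and are not the paper's actual selection criteria:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_responses(stimuli):
    """Stand-in for the neuron's mean response to each stimulus, averaged
    over repeated RSVP presentations; here just simulated numbers."""
    return rng.random(len(stimuli))

stimuli = list(range(1000))          # indices of the candidate images
keep_fraction, final_size = 0.2, 20

# Progressive reduction: keep the most effective fraction and retest it.
while len(stimuli) > final_size:
    r = mean_responses(stimuli)
    order = np.argsort(r)[::-1]
    n_keep = max(final_size, int(len(stimuli) * keep_fraction))
    stimuli = [stimuli[i] for i in order[:n_keep]]

print(stimuli)                       # the stimuli the cell responds to most strongly
```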


IEEE Transactions on Information Theory | 2005

Bayesian bin distribution inference and mutual information

Dominik Endres; Peter Földiák

We present an exact Bayesian treatment of a simple, yet sufficiently general probability distribution model. We consider piecewise-constant distributions P(X) with uniform (second-order) prior over location of discontinuity points and assigned chances. The predictive distribution and the model complexity can be determined completely from the data in a computational time that is linear in the number of degrees of freedom and quadratic in the number of possible values of X. Furthermore, exact values of the expectations of entropies and their variances can be computed with polynomial effort. The expectation of the mutual information thus becomes available as well, together with a strict upper bound on its variance. The resulting algorithm is particularly useful in experimental research areas where the number of available samples is severely limited (e.g., neurophysiology). Estimates on a simulated data set are more accurate than those obtained with a previously proposed method.
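
For reference, the quantity whose expectation and variance the exact Bayesian treatment delivers is the mutual information of a discrete joint distribution. The sketch below computes only the naive plug-in estimate from a made-up stimulus-response count table, i.e. the small-sample-biased estimate that methods like the one above improve on; it is not the paper's algorithm:

```python
import numpy as np

def mutual_information(counts):
    """Naive plug-in estimate of I(X;Y) in bits from a joint count table."""
    p_xy = counts / counts.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)        # marginal over rows (e.g. stimuli)
    p_y = p_xy.sum(axis=0, keepdims=True)        # marginal over columns (e.g. responses)
    nz = p_xy > 0                                # skip zero cells to avoid log(0)
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

counts = np.array([[12.0, 3.0], [2.0, 9.0]])     # made-up stimulus x response counts
print(mutual_information(counts))
```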


Neurocomputing | 2001

Stimulus optimisation in primary visual cortex

Peter Földiák

A computational method is introduced for finding effective stimuli for sensory neurons. In single-unit recording experiments, it maximises the simultaneously recorded responses by changing the stimulus along an estimated gradient of the response with respect to the stimulus. The gradient estimate is the correlation between noise added to an evolving base stimulus and the resulting responses. Pixel optimisation for monkey V1 rapidly produces stimuli consistent with conventionally determined tuning, even for complex cells. For the same complex cell, repeated runs of the optimisation gave solutions with different phase. Unlike reverse correlation, this method is applicable to non-linear, context-sensitive cells, possibly also in higher sensory areas.
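
A minimal sketch of the gradient-following loop described above, with a toy phase-invariant cell standing in for the recorded neuron; the noise scale, step size, number of probe stimuli per step and the starting stimulus are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_probes, sigma, step = 64, 50, 0.1, 0.5

def record_response(stimulus):
    """Stand-in for the recorded firing rate: a toy cell whose response is
    invariant to the sign (phase) of its preferred pattern."""
    preferred = np.sin(np.linspace(0, 4 * np.pi, n_pixels))
    return abs(float(stimulus @ preferred))

stimulus = rng.normal(scale=0.01, size=n_pixels)       # small random base stimulus
for _ in range(200):
    noise = rng.normal(scale=sigma, size=(n_probes, n_pixels))
    responses = np.array([record_response(stimulus + n) for n in noise])
    # Gradient estimate: correlation between the added noise and the responses.
    grad = (responses - responses.mean()) @ noise / n_probes
    stimulus = np.clip(stimulus + step * grad, -1, 1)   # hill-climb, keep pixels in range
```

Depending on the random start, the loop converges to either sign of the toy cell's preferred pattern, echoing the observation above that repeated runs on the same complex cell gave solutions with different phase.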


Neurocomputing | 1996

Learning generalisation and localisation: Competition for stimulus type and receptive field

Mike W. Oram; Peter Földiák

The evidence from neurophysiological recordings from the primate visual system suggests that sensory patterns are processed using units arranged in a hierarchical multi-layered network. Responses of these units show progressively increasing receptive field size combined with selectivity for increasing stimulus complexity at successively higher levels. It is argued that the rate of the increase in receptive field size is less than the maximum possible given the initial spread of neuronal projections that occurs during development. We show here that a competitive learning mechanism using a ‘trace-Hebbian’ learning rule [14] with a larger number of competing output units not only learns positional invariance for a given input feature but can also establish restricted receptive field sizes (i.e. less than the maximum size given the initial connections). Importantly, the same stimulus selectivity was maintained throughout the receptive field. It is shown that this is accompanied by a relative increase in the spatial evenness of the representation of each detector type across position within the input array. The network properties were found to be robust and stable over a wide range of learning parameters. We suggest that such a competitive mechanism may help account for the reported properties of cells in the ventral stream of the primate visual system.
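
A schematic sketch of competition among several output units combined with an activity trace, in the spirit of the mechanism described above; the toy input features, the trace-biased winner-take-all step and all parameter values are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_units, alpha, delta, width, sweep = 32, 6, 0.05, 0.3, 4, 8

W = rng.random((n_units, n_in)) + 0.1            # competing units' weight vectors
W /= np.linalg.norm(W, axis=1, keepdims=True)
traces = np.zeros(n_units)

def feature(kind, pos):
    """Toy input: one of two feature types placed at a given position."""
    x = np.zeros(n_in)
    x[pos:pos + width] = 1.0 if kind == 0 else np.linspace(1.0, 0.25, width)
    return x

for _ in range(3000):
    kind = int(rng.integers(2))
    start = int(rng.integers(n_in - width - sweep))
    traces[:] = 0.0
    # A short temporal sweep of one feature across neighbouring positions.
    for pos in range(start, start + sweep):
        x = feature(kind, pos)
        y = W @ x
        winner = int(np.argmax(y + traces))      # competition biased by the trace
        traces *= 1.0 - delta
        traces[winner] += delta * y[winner]
        W[winner] += alpha * traces[winner] * (x - W[winner])
        W[winner] /= np.linalg.norm(W[winner])
```

The trace keeps one unit winning throughout a local sweep, while the surplus of competing units allows detectors of the same type to split the input array into restricted receptive fields, as described above.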


Annals of Mathematics and Artificial Intelligence | 2009

An application of formal concept analysis to semantic neural decoding

Dominik Endres; Peter Földiák; Uta Priss

This paper proposes a novel application of Formal Concept Analysis (FCA) to neural decoding: the semantic relationships between the neural representations of large sets of stimuli are explored using concept lattices. In particular, the effects of neural code sparsity are modelled using the lattices. An exact Bayesian approach is employed to construct the formal context needed by FCA. This method is explained using an example of neurophysiological data from the high-level visual cortical area STSa. Prominent features of the resulting concept lattices are discussed, including indications for hierarchical face representation and a product-of-experts code in real neurons. The robustness of these features is illustrated by studying the effects of scaling the attributes.
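
A small self-contained sketch of the FCA step: a binary formal context (stimuli as objects, responsive neurons as attributes) and a naive enumeration of its formal concepts. The context entries below are invented; the paper derives them from the neurophysiological data with an exact Bayesian criterion:

```python
from itertools import combinations

# Made-up formal context: membership means "this neuron responds to this stimulus".
context = {
    "face_A":   {"n1", "n2"},
    "face_B":   {"n1", "n2", "n3"},
    "object_C": {"n3"},
    "object_D": {"n3", "n4"},
}
attributes = set().union(*context.values())

def extent(intent_set):
    """Objects whose attribute sets contain every attribute in intent_set."""
    return {o for o, attrs in context.items() if intent_set <= attrs}

def intent(objects):
    """Attributes shared by every object in the set (all attributes if empty)."""
    return set.intersection(*(context[o] for o in objects)) if objects else set(attributes)

# Naive enumeration of all formal concepts (extent, intent) via the closure operator.
concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        e = extent(intent(set(objs)))
        concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), "<->", sorted(i))
```

Ordering the concepts by inclusion of their extents gives the concept lattice that the paper analyses.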

Collaboration


Dive into Peter Földiák's collaborations.

Top Co-Authors

Dengke Xiao, University of St Andrews
Mike W. Oram, University of St Andrews
Robin Edwards, University of St Andrews
Uta Priss, Edinburgh Napier University