
Publication


Featured research published by James V. Stone.


Trends in Cognitive Sciences | 2002

Independent component analysis: an introduction

James V. Stone

Independent component analysis (ICA) is a method for automatically identifying the underlying factors in a given data set. This rapidly evolving technique is currently finding applications in analysis of biomedical signals (e.g. ERP, EEG, fMRI, optical imaging), and in models of visual receptive fields and separation of speech signals. This article illustrates these applications, and provides an informal introduction to ICA.
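As a concrete illustration of the idea (not the article's own code), the following minimal Python sketch mixes two invented non-Gaussian sources with a hypothetical mixing matrix and recovers them with scikit-learn's FastICA; all signals and parameter values are assumptions made purely for demonstration.

```python
# Minimal sketch: recovering two independent sources from linear mixtures
# with FastICA (scikit-learn). Signals and mixing matrix are invented.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two statistically independent (non-Gaussian) source signals.
s1 = np.sign(np.sin(3 * t))            # square wave
s2 = rng.laplace(size=t.size)          # super-Gaussian noise
S = np.c_[s1, s2]

# Observed data: an unknown linear mixture of the sources.
A = np.array([[1.0, 0.5],
              [0.7, 1.2]])             # hypothetical mixing matrix
X = S @ A.T

# ICA estimates an un-mixing matrix so that the recovered signals are
# as statistically independent as possible.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)           # recovered sources (up to scale/order)
print(S_hat.shape)                     # (2000, 2)
```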


Proceedings of the Royal Society of London B: Biological Sciences | 2001

When is now? Perception of simultaneity

James V. Stone; Nicola M. Hunkin; John Porrill; R. Wood; V. Keeler; M. Beanland; M. Port; N.R. Porter

We address the following question: is there a difference (D) between the amounts of time taken for auditory and visual stimuli to be perceived? On each of 1000 trials, observers were presented with a light–sound pair separated by a stimulus onset asynchrony (SOA) between −250 ms (sound first) and 250 ms. Observers indicated whether the light–sound pair came on simultaneously by pressing one of two (yes or no) keys. The SOA most likely to yield affirmative responses was defined as the point of subjective simultaneity (PSS). PSS values were between −21 ms (i.e. sound 21 ms before light) and 150 ms. Evidence is presented that each PSS is observer specific. In a second experiment, each observer was tested at two observer–stimulus distances. The resultant PSS values are highly correlated (r = 0.954, p = 0.003), suggesting that each observer's PSS is stable. PSS values were significantly affected by observer–stimulus distance, suggesting that observers do not take account of the effect of changes in distance on the difference in arrival times of light and sound. The difference (RTd) in simple reaction time to single visual and auditory stimuli was also estimated; no evidence was found that RTd is observer specific or stable. The implications of these findings for the perception of multisensory stimuli are discussed.
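To make the PSS definition concrete, here is a hedged sketch of one common way to estimate it: fit a bell-shaped curve to the proportion of "simultaneous" responses as a function of SOA and take the peak location as the PSS. The response proportions and the choice of a Gaussian fit are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch: estimating a point of subjective simultaneity (PSS)
# by fitting a Gaussian to the proportion of "simultaneous" responses as a
# function of SOA. Data values are invented.
import numpy as np
from scipy.optimize import curve_fit

soa = np.array([-250, -150, -50, 0, 50, 150, 250], dtype=float)   # ms
p_yes = np.array([0.05, 0.20, 0.70, 0.85, 0.90, 0.45, 0.10])      # hypothetical

def gaussian(x, amp, mu, sigma):
    """Bell-shaped response curve; mu is the estimated PSS."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

(amp, pss, sigma), _ = curve_fit(gaussian, soa, p_yes, p0=[1.0, 0.0, 100.0])
print(f"Estimated PSS: {pss:.1f} ms (positive = light onset leads sound)")
```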


Neural Computation | 2001

Blind Source Separation Using Temporal Predictability

James V. Stone

A measure of temporal predictability is defined and used to separate linear mixtures of signals. Given any set of statistically independent source signals, it is conjectured here that a linear mixture of those signals has the following property: the temporal predictability of any signal mixture is less than (or equal to) that of any of its component source signals. It is shown that this property can be used to recover source signals from a set of linear mixtures of those signals by finding an un-mixing matrix that maximizes a measure of temporal predictability for each recovered signal. This matrix is obtained as the solution to a generalized eigenvalue problem; such problems have scaling characteristics of O(N³), where N is the number of signal mixtures. In contrast to independent component analysis, the temporal predictability method requires minimal assumptions regarding the probability density functions of source signals. It is demonstrated that the method can separate signal mixtures in which each mixture is a linear combination of source signals with super-Gaussian, sub-Gaussian, and Gaussian probability density functions, as well as mixtures of voices and music.
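The following Python sketch follows the spirit of the method under stated assumptions: short-term and long-term prediction-error covariance matrices of the mixtures are formed with simple moving-average predictors (standing in for exponentially weighted predictors), and the un-mixing matrix is read off from the resulting generalized eigenvalue problem. The signals, half-lives, and boxcar predictor are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch of blind source separation by temporal predictability:
# maximize the ratio of long-term to short-term prediction error of each
# recovered signal via a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh
from scipy.ndimage import uniform_filter1d

def prediction_error_cov(X, half_life):
    """Covariance of deviations from a moving-average prediction of X.

    X has shape (n_mixtures, n_samples). A simple boxcar moving average
    stands in for an exponentially weighted predictor.
    """
    pred = uniform_filter1d(X, size=max(2 * half_life, 2), axis=1)
    err = X - pred
    return err @ err.T / X.shape[1]

def separate(X, short=5, long=500):
    """Return recovered signals and the un-mixing matrix W."""
    U = prediction_error_cov(X, short)   # short-term (fast) errors
    V = prediction_error_cov(X, long)    # long-term (slow) errors
    # Maximize long-term/short-term error ratio: generalized eigenproblem.
    _, W = eigh(V, U)                    # columns are generalized eigenvectors
    return W.T @ X, W.T

# Toy demonstration with two invented sources and a random mixing matrix.
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 4000)
S = np.vstack([np.sin(2 * np.pi * 0.5 * t),
               np.sign(np.sin(2 * np.pi * 1.3 * t))])
X = rng.normal(size=(2, 2)) @ S + 0.01 * rng.normal(size=S.shape)  # mixtures
Y, W = separate(X)
print(Y.shape)                           # (2, 4000) recovered signals
```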


NeuroImage | 2002

Spatiotemporal Independent Component Analysis of Event-Related fMRI Data Using Skewed Probability Density Functions

James V. Stone; John Porrill; N.R. Porter; Iain D. Wilkinson

We introduce two independent component analysis (ICA) methods, spatiotemporal ICA (stICA) and skew-ICA, and demonstrate the utility of these methods in analyzing synthetic and event-related fMRI data. First, stICA simultaneously maximizes statistical independence over both time and space. This contrasts with conventional ICA methods, which maximize independence either over time only or over space only; these methods often yield physically improbable solutions. Second, skew-ICA is based on the assumption that images have skewed probability density functions (pdfs), an assumption consistent with spatially localized regions of activity. In contrast, conventional ICA is based on the physiologically unrealistic assumption that images have symmetric pdfs. We combine stICA and skew-ICA, to form skew-stICA, and use it to analyze synthetic data and data from an event-related, left-right visual hemifield fMRI experiment. Results obtained with skew-stICA are superior to those of principal component analysis, spatial ICA (sICA), temporal ICA, stICA, and skew-sICA. We argue that skew-stICA works because it is based on physically realistic assumptions and that the potential of ICA can only be realized if such prior knowledge is incorporated into ICA methods.
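To clarify the distinction the paper draws between decomposing over space and over time, the sketch below applies a generic ICA algorithm (scikit-learn's FastICA, used here only as a stand-in) to a hypothetical time-by-voxel data matrix in both orientations; stICA and skew-stICA themselves are not implemented here.

```python
# Illustrative contrast between temporal and spatial ICA on a hypothetical
# (time x voxels) data matrix built from invented components.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_t, n_v, k = 200, 1000, 3
tc = rng.laplace(size=(n_t, k))           # hypothetical component time courses
maps = rng.laplace(size=(k, n_v))         # hypothetical spatial maps
X = tc @ maps + 0.1 * rng.normal(size=(n_t, n_v))   # time x voxels

# Temporal ICA: time points are samples, so the recovered time courses
# are maximally independent over time.
tica = FastICA(n_components=k, random_state=0)
tc_hat = tica.fit_transform(X)            # (200, 3) independent time courses
maps_from_tica = tica.mixing_.T           # (3, 1000) associated maps

# Spatial ICA: voxels are samples, so the recovered maps are maximally
# independent over space.
sica = FastICA(n_components=k, random_state=0)
maps_hat = sica.fit_transform(X.T).T      # (3, 1000) independent spatial maps
tc_from_sica = sica.mixing_               # (200, 3) associated time courses
```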


Vision Research | 1998

Object Recognition Using Spatiotemporal Signatures

James V. Stone

The sequence of images generated by motion between observer and object specifies a spatiotemporal signature for that object. Evidence is presented that such spatiotemporal signatures are used in object recognition. Subjects learned novel, three-dimensional, rotating objects from image sequences in a continuous recognition task. During learning, the temporal order of images of a given object was constant. During testing, the order of images in each sequence was reversed, relative to its order during learning. This image sequence reversal produced significant reaction time increases and recognition rate decreases. Results are interpreted in terms of object-specific spatiotemporal signatures.


Proceedings of the Royal Society of London B: Biological Sciences | 2004

Recurrent cerebellar architecture solves the motor-error problem

John Porrill; Paul Dean; James V. Stone

Current views of cerebellar function have been heavily influenced by the models of Marr and Albus, who suggested that the climbing fibre input to the cerebellum acts as a teaching signal for motor learning. It is commonly assumed that this teaching signal must be motor error (the difference between actual and correct motor command), but this approach requires complex neural structures to estimate unobservable motor error from its observed sensory consequences. We have proposed elsewhere a recurrent decorrelation control architecture in which Marr–Albus models learn without requiring motor error. Here, we prove convergence for this architecture and demonstrate important advantages for the modular control of systems with multiple degrees of freedom. These results are illustrated by modelling adaptive plant compensation for the three-dimensional vestibular ocular reflex. This provides a functional role for recurrent cerebellar connectivity, which may be a generic anatomical feature of projections between regions of cerebral and cerebellar cortex.


Proceedings of the Royal Society of London B: Biological Sciences | 2002

Decorrelation control by the cerebellum achieves oculomotor plant compensation in simulated vestibulo-ocular reflex

Paul Dean; John Porrill; James V. Stone

We introduce decorrelation control as a candidate algorithm for the cerebellar microcircuit and demonstrate its utility for oculomotor plant compensation in a linear model of the vestibulo-ocular reflex (VOR). Using an adaptive-filter representation of cerebellar cortex and an anti-Hebbian learning rule, the algorithm learnt to compensate for the oculomotor plant by minimizing correlations between a predictor variable (eye-movement command) and a target variable (retinal slip), without requiring a motor-error signal. Because it also provides an estimate of the unpredicted component of the target variable, decorrelation control can simplify both motor coordination and sensory acquisition. It thus unifies motor and sensory cerebellar functions.
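A minimal sketch of the decorrelation idea, under simplifying assumptions: an adaptive filter over delayed copies of a motor command updates its weights anti-Hebbianly against a sensory error signal until command and error are decorrelated. The "plant" is an invented filter and the correction is added directly to the error rather than passing through a simulated plant, so this illustrates the learning rule only, not the paper's VOR model.

```python
# Hedged sketch of decorrelation (anti-Hebbian) learning: an adaptive filter
# on a tapped delay line of motor commands learns to cancel the sensory
# error produced by an unknown plant.
import numpy as np

rng = np.random.default_rng(3)
n_taps, n_steps, beta = 20, 20000, 5e-4

# Unknown "plant": the sensory error is some filtered version of the command.
plant = rng.normal(size=n_taps) * np.exp(-np.arange(n_taps) / 5.0)

w = np.zeros(n_taps)                      # adaptive-filter weights
buf = np.zeros(n_taps)                    # recent motor commands (delay line)

for t in range(n_steps):
    u = rng.normal()                      # motor command (predictor variable)
    buf = np.roll(buf, 1); buf[0] = u
    z = w @ buf                           # adaptive-filter (corrective) output
    slip = plant @ buf + z                # target variable (sensory error)
    w -= beta * slip * buf                # anti-Hebbian decorrelation step

print(np.abs(w + plant).max())            # near zero: filter output cancels the plant
```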


Neural Computation | 1996

Learning perceptually salient visual parameters using spatiotemporal smoothness constraints

James V. Stone

A model is presented for unsupervised learning of low-level vision tasks, such as the extraction of surface depth. A key assumption is that perceptually salient visual parameters (e.g., surface depth) vary smoothly over time. This assumption is used to derive a learning rule that maximizes the long-term variance of each unit's outputs whilst simultaneously minimizing its short-term variance. The length of the half-life associated with each of these variances is not critical to the success of the algorithm. The learning rule involves a linear combination of anti-Hebbian and Hebbian weight changes, over short and long time scales, respectively. This maximizes the information throughput with respect to low-frequency parameters implicit in the input sequence. The model is used to learn stereo disparity from temporal sequences of random-dot and gray-level stereograms containing synthetically generated subpixel disparities. The presence of temporal discontinuities in disparity does not prevent learning or generalization to previously unseen image sequences. The implications of this class of unsupervised methods for learning in perceptual systems are discussed.
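A hedged sketch of this kind of rule, with invented input statistics, half-lives, and learning rate: the unit's weight vector is updated with a Hebbian term computed against long-time-scale running averages and an anti-Hebbian term computed against short-time-scale ones, performing gradient ascent on the log ratio of long-term to short-term output variance.

```python
# Hedged sketch of a "slowness" learning rule: maximize long-term output
# variance while minimizing short-term output variance, using Hebbian
# (long time scale) and anti-Hebbian (short time scale) updates.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_steps, eta = 10, 50000, 1e-3
lam_short, lam_long = 0.5, 0.999          # short and long half-life decay factors

# Input: a slowly varying latent "parameter" drives one direction; the rest
# is fast noise. The unit should learn to point along the slow direction.
slow_dir = rng.normal(size=n_in); slow_dir /= np.linalg.norm(slow_dir)

w = rng.normal(size=n_in) * 0.1
x_s = x_l = np.zeros(n_in)                # short/long-term traces of the input
z_s = z_l = 0.0                           # short/long-term traces of the output
U = V = 1.0                               # short/long-term output variances
slow = 0.0

for t in range(n_steps):
    slow = 0.999 * slow + 0.05 * rng.normal()           # slow latent parameter
    x = slow * slow_dir + 0.5 * rng.normal(size=n_in)   # fast, noisy input
    z = w @ x

    # Exponentially weighted running means and variances.
    x_s = lam_short * x_s + (1 - lam_short) * x
    x_l = lam_long * x_l + (1 - lam_long) * x
    z_s = lam_short * z_s + (1 - lam_short) * z
    z_l = lam_long * z_l + (1 - lam_long) * z
    U = lam_short * U + (1 - lam_short) * (z - z_s) ** 2
    V = lam_long * V + (1 - lam_long) * (z - z_l) ** 2

    # Hebbian term (long time scale) minus anti-Hebbian term (short time
    # scale): stochastic gradient ascent on log(V) - log(U).
    w += eta * ((z - z_l) * (x - x_l) / V - (z - z_s) * (x - x_s) / U)
    w /= np.linalg.norm(w)                # keep the weight vector bounded

print(abs(w @ slow_dir))                  # should approach 1 if the unit aligned with the slow direction
```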


Vision Research | 1999

Object recognition: view-specificity and motion-specificity.

James V. Stone

This paper describes an experiment to distinguish between two theories of human visual object recognition. According to the view-specificity hypothesis, object recognition is based on particular learned views, whereas the motion-specificity hypothesis states that object recognition depends on particular directed view-sequences. Both hypotheses imply a degree of view-bias (i.e. recognition of a given object is associated with a small number of views). Whereas the view-specificity hypothesis attributes this view-bias to a preference for particular views, the motion-specificity hypothesis attributes view-bias to a preference for particular directed view-sequences. Results presented here suggest that recognition of 3D rotating objects involves significant view-bias. This view-bias appears to be associated with an underlying bias for particular directed view-sequences, and not for particular views.


Network: Computation In Neural Systems | 1995

A learning rule for extracting spatio-temporal invariances

James V. Stone; Alistair J. Bray

The inputs to photoreceptors tend to change rapidly over time, whereas physical parameters (e.g. surface depth) underlying these changes vary more slowly. Accordingly, if a neuron codes for a physical parameter then its output should also change slowly, despite its rapidly fluctuating inputs. We demonstrate that a model neuron which adapts to make its output vary smoothly over time can learn to extract invariances implicit in its input. This learning consists of a linear combination of Hebbian and anti-Hebbian synaptic changes, operating simultaneously upon the same connection weights but at different time scales. This is shown to be sufficient for the unsupervised learning of simple spatio-temporal invariances.

Collaboration


Dive into James V. Stone's collaboration.

Top Co-Authors

John Porrill, University of Sheffield
N.R. Porter, University of Sheffield
Paul Dean, University of Sheffield
John P. Frisby, Massachusetts Institute of Technology
David Buckley, Royal Hallamshire Hospital