Nigel G. Stocks
University of Warwick
Publications
Featured research published by Nigel G. Stocks.
Il Nuovo Cimento D | 1995
Mark Dykman; D. G. Luchinsky; Riccardo Mannella; Peter V. E. McClintock; N. D. Stein; Nigel G. Stocks
We outline the historical development of stochastic resonance (SR), a phenomenon in which the signal and/or the signal-to-noise ratio in a nonlinear system increases with increasing noise intensity. We discuss the basic theoretical ideas explaining and describing SR, and we review some revealing experimental data that place SR within the wider context of statistical physics. We emphasize the close relationship of SR to some effects that are well known in condensed-matter physics.
Fluctuation and Noise Letters | 2002
Nigel G. Stocks; David Allingham; Robert Morse
In this paper we explore the possibility of using a recently discovered form of stochastic resonance, termed suprathreshold stochastic resonance (SSR), to improve speech comprehension in patients fitted with cochlear implants. A leaky-integrate-and-fire (LIF) neurone is used to model cochlear nerve activity under electrical stimulation. This model, in principle, captures key aspects of temporal coding in analogue cochlear implants. Estimates of the information transmitted by a population of nerve fibres are obtained as a function of the internal (neuronal) noise level. We conclude that SSR does indeed provide a possible mechanism by which information transmission along the cochlear nerve can be improved, and thus may well lead to improved speech comprehension.
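A much-simplified sketch of the SSR mechanism described here is given below. Each fibre is reduced to a static threshold unit rather than a full LIF neurone; the population size, threshold, stimulus statistics and noise levels are illustrative assumptions, and the mutual information is estimated numerically by binning rather than by the paper's method.

```python
# A minimal sketch of suprathreshold stochastic resonance (SSR) in a
# population of model nerve fibres. Each fibre is reduced to a static
# threshold unit (the paper uses leaky-integrate-and-fire dynamics);
# all numerical values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_FIBRES = 31          # size of the fibre population (assumed)
THRESHOLD = 0.0        # common firing threshold (assumed)
N_SAMPLES = 200_000    # stimulus samples used for the MI estimate


def mutual_information(noise_std: float) -> float:
    """Estimate I(stimulus; spike count) in bits for one internal-noise level."""
    # Gaussian "analogue" stimulus shared by all fibres.
    stimulus = rng.normal(0.0, 1.0, N_SAMPLES)
    # Independent internal (neuronal) noise in every fibre.
    noise = rng.normal(0.0, noise_std, (N_SAMPLES, N_FIBRES))
    # Each fibre fires if its noisy input crosses the threshold;
    # the population output is the total spike count 0..N_FIBRES.
    counts = np.sum(stimulus[:, None] + noise > THRESHOLD, axis=1)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # I(X; K) = H(K) - H(K|X), with H(K|X) averaged over stimulus bins.
    p_counts = np.bincount(counts, minlength=N_FIBRES + 1) / N_SAMPLES
    h_counts = entropy(p_counts)

    # Conditional entropy estimated by discretising the stimulus into 50 bins.
    bins = np.quantile(stimulus, np.linspace(0, 1, 51))
    idx = np.clip(np.digitize(stimulus, bins) - 1, 0, 49)
    h_cond = 0.0
    for b in range(50):
        sel = counts[idx == b]
        p = np.bincount(sel, minlength=N_FIBRES + 1) / len(sel)
        h_cond += (len(sel) / N_SAMPLES) * entropy(p)
    return h_counts - h_cond


for sigma in (0.01, 0.1, 0.3, 1.0, 3.0):
    print(f"noise std {sigma:4.2f}: I ≈ {mutual_information(sigma):.2f} bits")
```

With very little noise all units respond identically (roughly 1 bit); the estimated information rises at intermediate noise and falls again at high noise, which is the SSR signature discussed in the abstract.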
Journal of Physics A | 1993
Nigel G. Stocks; N. D. Stein; Peter V. E. McClintock
The first observations of noise-induced enhancements and phase shifts of a weak periodic signal, which are characteristic signatures of stochastic resonance (SR), are reported for a monostable system. The results are shown to be in good agreement with a theoretical description based on linear-response theory and the fluctuation-dissipation theorem. It is argued that SR is a general phenomenon that can, in principle, occur in any underdamped nonlinear oscillator.
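A minimal numerical sketch of this setting follows: a monostable, underdamped single-well Duffing oscillator driven by a weak periodic force plus Gaussian noise, with the response at the drive frequency measured as a function of noise intensity. All parameter values are illustrative assumptions, not taken from the paper, so the size of any enhancement will depend on the damping and detuning chosen.

```python
# Sketch: noise-dependent response of a driven, underdamped, monostable
# nonlinear oscillator (single-well Duffing). Parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)

GAMMA = 0.05       # damping constant (assumed)
OMEGA0 = 1.0       # linear eigenfrequency (assumed)
BETA = 0.1         # hardening nonlinearity: U(x) = x^2/2 + BETA*x^4/4 (assumed)
A_DRIVE = 0.02     # weak periodic forcing amplitude (assumed)
OMEGA_D = 1.05     # drive frequency, detuned above OMEGA0 (assumed)
DT = 0.01
N_STEPS = 400_000


def response_amplitude(noise_intensity: float) -> float:
    """Amplitude of the x(t) component at the drive frequency (lock-in estimate)."""
    t = np.arange(N_STEPS) * DT
    # Langevin noise increments for <f(t)f(t')> = 4*GAMMA*D*delta(t-t').
    xi = rng.normal(size=N_STEPS) * np.sqrt(4.0 * GAMMA * noise_intensity * DT)
    xs = np.empty(N_STEPS)
    x = v = 0.0
    for i in range(N_STEPS):
        force = (-2.0 * GAMMA * v - OMEGA0**2 * x - BETA * x**3
                 + A_DRIVE * np.cos(OMEGA_D * t[i]))
        v += force * DT + xi[i]
        x += v * DT
        xs[i] = x
    # Project x(t) onto cos/sin at the drive frequency.
    c = 2.0 * np.mean(xs * np.cos(OMEGA_D * t))
    s = 2.0 * np.mean(xs * np.sin(OMEGA_D * t))
    return float(np.hypot(c, s))


for d in (0.001, 0.01, 0.05, 0.1, 0.3):
    print(f"noise intensity {d:5.3f}: response ≈ {response_amplitude(d):.4f}")
```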
IEEE Transactions on Circuits and Systems Ii: Analog and Digital Signal Processing | 1999
D. G. Luchinsky; Riccardo Mannella; Peter V. E. McClintock; Nigel G. Stocks
For Part I, see ibid., vol. 46, no. 9, pp. 1205-14 (1999). Stochastic resonance (SR), in which a periodic signal in a nonlinear system can be amplified by added noise, is discussed. The application of circuit-modelling techniques to the conventional form of SR, which occurs in static bistable potentials, was considered in the companion paper. Here we describe the investigation of nonconventional forms of SR, in part using similar electronic techniques. In the small-signal limit, the results are well described in terms of linear response theory. Some other phenomena of topical interest, closely related to SR, are also treated.
Artificial Intelligence in Medicine | 2012
Jianhua Yang; Harsimrat Singh; Evor L. Hines; Friederike Schlaghecken; Daciana Iliescu; Mark S. Leeson; Nigel G. Stocks
OBJECTIVE: An electroencephalogram-based (EEG-based) brain-computer interface (BCI) provides a new communication channel between the human brain and a computer. Amongst the various available techniques, artificial neural networks (ANNs) are well established in BCI research and have numerous successful applications. However, one of the drawbacks of conventional ANNs is the lack of an explicit input optimization mechanism. In addition, the results of ANN learning are usually not easily interpretable. In this paper, we have applied an ANN-based method, the genetic neural mathematic method (GNMM), to two EEG channel selection and classification problems, aiming to address the issues above.

METHODS AND MATERIALS: Pre-processing steps include least-squares (LS) approximation to determine the overall signal increase/decrease rate, and locally weighted polynomial regression (Loess) and the fast Fourier transform (FFT) to smooth the signals and determine signal strength and variations. The GNMM method consists of three successive steps: (1) a genetic-algorithm-based (GA-based) input selection process; (2) multi-layer perceptron-based (MLP-based) modelling; and (3) rule extraction based upon successful training. The fitness function used in the GA is the training error when an MLP is trained for a limited number of epochs. By averaging the appearance of a particular channel in the winning chromosome over several runs, we were able to minimize the error due to randomness and to obtain an energy distribution around the scalp. In the second step, a threshold was used to select a subset of channels to be fed into an MLP, which performed modelling with a large number of iterations, thus fine-tuning the input/output relationship. Upon successful training, neurons in the input layer are divided into four sub-spaces to produce if-then rules (step 3). Two datasets were used as case studies to perform three classifications. The first dataset comprised electrocorticography (ECoG) recordings used in BCI competition III. The data belonged to two categories: imagined movements of either a finger or the tongue. They were recorded using an 8 × 8 ECoG platinum electrode grid at a sampling rate of 1000 Hz, for a total of 378 trials. The second dataset consisted of a 32-channel, 256 Hz EEG recording of 960 trials in which participants had to execute a left- or right-hand button press in response to a left- or right-pointing arrow stimulus. These data were used to classify correct/incorrect responses and left/right hand movements.

RESULTS: For the first dataset, 100 samples were reserved for testing, and the remainder were used for training and validation with a 90%:10% split under K-fold cross-validation. Using the top 10 channels selected by GNMM, we achieved a classification accuracy of 0.80 ± 0.04 on the testing dataset, which compares favourably with results reported in the literature. For the second case, we performed multi-time-window pre-processing over single trials. By selecting 6 channels out of 32, we achieved classification accuracies of about 0.86 for response correctness and 0.82 for the responding hand. Furthermore, 139 regression rules were identified after training was completed.

CONCLUSIONS: We demonstrate that GNMM is able to perform effective channel selection/reduction, which not only reduces the difficulty of data collection but also greatly improves the generalization of the classifier. An important step that affects the effectiveness of GNMM is the pre-processing method. In this paper, we also highlight the importance of choosing an appropriate time-window position.
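The GA-based channel-selection step (step 1 above) can be sketched as follows. This is a hand-rolled toy GA on synthetic data, not the authors' implementation; the population size, number of generations and runs, the MLP configuration, and the final "top 6" cut-off are all illustrative assumptions.

```python
# Sketch of GA-based channel selection with an MLP-training-error fitness,
# channel-appearance averaging over runs, and a threshold-style final cut.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

N_TRIALS, N_CHANNELS = 300, 32             # trials x channels (assumed)
X = rng.normal(size=(N_TRIALS, N_CHANNELS))
y = (X[:, 3] + X[:, 17] > 0).astype(int)   # only two channels are informative

POP, GENERATIONS, RUNS = 16, 6, 3          # deliberately small (assumed)


def fitness(mask: np.ndarray) -> float:
    """Training error of an MLP trained for a limited number of epochs
    (convergence warnings are expected because training is cut short)."""
    if mask.sum() == 0:
        return 1.0
    cols = mask.astype(bool)
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=30, random_state=0)
    mlp.fit(X[:, cols], y)
    return 1.0 - mlp.score(X[:, cols], y)


def one_ga_run() -> np.ndarray:
    """Return the winning chromosome (binary channel mask) of one GA run."""
    pop = rng.integers(0, 2, size=(POP, N_CHANNELS))
    for _ in range(GENERATIONS):
        errors = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(errors)[: POP // 2]]   # truncation selection
        children = parents.copy()
        for child in children:                          # crossover + mutation
            mate = parents[rng.integers(len(parents))]
            cut = rng.integers(1, N_CHANNELS)
            child[cut:] = mate[cut:]
            flip = rng.random(N_CHANNELS) < 0.05
            child[flip] ^= 1
        pop = np.vstack([parents, children])
    errors = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(errors)]


# Average each channel's appearance in the winning chromosome over runs,
# then keep the channels above a threshold (here simply the top 6).
appearance = np.mean([one_ga_run() for _ in range(RUNS)], axis=0)
selected = np.argsort(appearance)[-6:]
print("selected channels:", sorted(selected.tolist()))
```

In the paper this subset is then passed to a second, fully trained MLP and to the rule-extraction stage; those steps are not reproduced here.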
Physics Letters A | 2006
Mark D. McDonnell; Nigel G. Stocks; Charles E. M. Pearce; Derek Abbott
Author affiliations: School of Electrical and Electronic Engineering and Centre for Biomedical Engineering, The University of Adelaide, SA 5005, Australia; School of Engineering, The University of Warwick, Coventry CV4 7AL, United Kingdom; School of Mathematical Sciences, The University of Adelaide, SA 5005, Australia.
Physics Letters A | 2001
Nigel G. Stocks
An exact analytical expression is presented for the information transmitted through an array of threshold elements subject to a uniformly distributed signal and internal noise. Additionally, approximations are obtained that accurately predict the optimal noise intensity and the information gains achieved by the recently reported effect of suprathreshold stochastic resonance (N.G. Stocks, Phys. Rev. Lett. 84 (2000) 2310).
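The quantity in question can be evaluated numerically as a cross-check on this kind of result. The sketch below computes the transmitted information for a uniform signal and uniform internal noise by quadrature over the binomial channel P(k|x); the array size, threshold and ranges are arbitrary assumptions, and the paper's closed-form expression is not reproduced.

```python
# Sketch: information transmitted by an array of N identical threshold
# elements with a uniform signal and independent uniform internal noise,
# computed by numerical quadrature (not the paper's analytical formula).
import numpy as np
from scipy.stats import binom

N = 15                       # number of threshold elements (assumed)
THETA = 0.0                  # common threshold, at the signal mean (assumed)
SIGNAL_HALF_WIDTH = 0.5      # signal x ~ U(-0.5, 0.5) (assumed)


def transmitted_information(noise_half_width: float, n_grid: int = 2001) -> float:
    """I(x; k) in bits, where k is the number of elements whose noisy
    input exceeds the threshold."""
    x = np.linspace(-SIGNAL_HALF_WIDTH, SIGNAL_HALF_WIDTH, n_grid)
    # For uniform noise on (-a, a), an element fires with probability
    # p(x) = clip((x - theta + a) / (2a), 0, 1).
    a = noise_half_width
    p = np.clip((x - THETA + a) / (2 * a), 0.0, 1.0)
    # P(k | x) is binomial; rows index x, columns index k = 0..N.
    pk_given_x = binom.pmf(np.arange(N + 1)[None, :], N, p[:, None])
    pk = pk_given_x.mean(axis=0)             # marginal over the uniform signal

    def entropy(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))

    return entropy(pk) - np.mean([entropy(row) for row in pk_given_x])


for a in (0.05, 0.2, 0.5, 1.0, 2.0):
    print(f"noise half-width {a:4.2f}: I ≈ {transmitted_information(a):.3f} bits")
```

The information rises from about 1 bit at vanishing noise to a maximum at an intermediate noise width and then decays, consistent with the suprathreshold stochastic resonance effect the abstract refers to.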
Physical Review Letters | 2008
Mark D. McDonnell; Nigel G. Stocks
A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon's mutual information and Fisher information, and on the optimality of the Jeffreys prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
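The construction "tuning curve as a function of the stimulus CDF" can be illustrated with a short sketch. The shape function g(.) below is an arbitrary placeholder; the paper derives the optimal shape from the neuron's mean-variance relationship, which is not reproduced here, and the peak rate is an assumed value.

```python
# Sketch: building a sigmoidal tuning curve as a fixed monotone function of
# the stimulus' cumulative distribution function (CDF), so the response
# statistics automatically match the stimulus distribution.
import numpy as np

rng = np.random.default_rng(0)
R_MAX = 50.0                                 # peak firing rate in Hz (assumed)


def g(u: np.ndarray) -> np.ndarray:
    """Placeholder monotone shape mapping [0, 1] -> [0, 1]."""
    return u ** 2


def tuning_curve(stimulus_samples: np.ndarray):
    """Return f(s) = R_MAX * g(F(s)), with F the empirical stimulus CDF."""
    sorted_s = np.sort(stimulus_samples)

    def f(s):
        cdf = np.searchsorted(sorted_s, s, side="right") / len(sorted_s)
        return R_MAX * g(cdf)

    return f


# The same construction adapts to different stimulus statistics, because
# F(S) is uniformly distributed whatever the stimulus distribution is.
for name, samples in [("gaussian", rng.normal(0, 1, 10_000)),
                      ("exponential", rng.exponential(1.0, 10_000))]:
    f = tuning_curve(samples)
    rates = f(samples)
    print(f"{name:12s} mean rate ≈ {rates.mean():5.1f} Hz, "
          f"median rate ≈ {np.median(rates):5.1f} Hz")
```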
Physical Review E | 2007
Mark D. McDonnell; Nigel G. Stocks; Derek Abbott
Suprathreshold stochastic resonance (SSR) is a form of noise-enhanced signal transmission that occurs in a parallel array of independently noisy, identical threshold nonlinearities, including model neurons. Unlike most forms of stochastic resonance, the output response to suprathreshold random input signals of arbitrary magnitude is improved by the presence of even small amounts of noise. In this paper, the information transmission performance of SSR in the limit of a large array size is considered. Using a relationship between Shannon's mutual information and Fisher information, a sufficient condition for optimality, i.e., channel capacity, is derived. It is shown that capacity is achieved when the signal distribution is the Jeffreys prior formed from the noise distribution, or when the noise distribution depends on the signal distribution via a cosine relationship. These results provide theoretical verification and justification for previous work in both computational neuroscience and electronics.
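For reference, the Jeffreys prior invoked in this result has the general form below. The notation (k for the array output, J for the Fisher information, which in this setting is determined by the noise distribution) is ours, not the paper's, and the paper's specific construction from the noise distribution is not reproduced.

```latex
% Generic form of the Jeffreys prior: the capacity-achieving signal density
% is proportional to the square root of the Fisher information J(x) that the
% array output k carries about the input x.
p_X^{\mathrm{opt}}(x) = \frac{\sqrt{J(x)}}{\int \sqrt{J(x')}\,\mathrm{d}x'},
\qquad
J(x) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial x}\ln P(k \mid x)\right)^{2} \middle|\; x\right].
```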
Fluctuation and Noise Letters | 2005
Mark D. McDonnell; Nigel G. Stocks; Charles E. M. Pearce; Derek Abbott
Signal quantization in the presence of independent, identically distributed, large-amplitude threshold noise is examined. It has previously been shown that when all quantization thresholds are set to the same value, this situation exhibits a form of stochastic resonance known as suprathreshold stochastic resonance, meaning that optimal quantizer performance occurs at a small, nonzero input signal-to-noise ratio. Here we examine the performance of this stochastic quantization in terms of both mutual information and mean square error distortion. It is also shown that, for low input signal-to-noise ratios, the case of all thresholds being identical provides the optimal mean square error distortion performance for the given noise conditions.
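A small sketch of the distortion side of this analysis follows: a quantizer whose thresholds are all set to the same value, each corrupted by independent noise, decoded with a per-state reconstruction value, with the mean square error estimated as a function of the threshold-noise level. The signal and noise distributions, the array size and the decoder are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: mean square error distortion of a stochastic quantizer with all
# thresholds identical and independent noise on each threshold.
import numpy as np

rng = np.random.default_rng(2)

N_THRESHOLDS = 15
THRESHOLD = 0.0
N_SAMPLES = 200_000


def mse_distortion(noise_std: float) -> float:
    x = rng.normal(0.0, 1.0, N_SAMPLES)                  # input signal
    noise = rng.normal(0.0, noise_std, (N_SAMPLES, N_THRESHOLDS))
    k = np.sum(x[:, None] + noise > THRESHOLD, axis=1)   # output state 0..N

    # Decoder: reconstruction value per state = conditional mean of x,
    # estimated on the first half of the samples, evaluated on the second.
    half = N_SAMPLES // 2
    recon = np.zeros(N_THRESHOLDS + 1)
    for state in range(N_THRESHOLDS + 1):
        sel = x[:half][k[:half] == state]
        recon[state] = sel.mean() if len(sel) else 0.0
    x_hat = recon[k[half:]]
    return float(np.mean((x[half:] - x_hat) ** 2))


for sigma in (0.01, 0.1, 0.3, 0.7, 1.0, 3.0):
    print(f"threshold-noise std {sigma:4.2f}: MSE ≈ {mse_distortion(sigma):.3f}")
```

With almost no noise the identical thresholds act as a single binary quantizer and the distortion is high; it reaches a minimum at an intermediate noise level before degrading again, mirroring the mutual-information behaviour discussed above.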