
Publication


Featured research published by Mark D. McDonnell.


PLOS Computational Biology | 2009

What Is Stochastic Resonance? Definitions, Misconceptions, Debates, and Its Relevance to Biology

Mark D. McDonnell; Derek Abbott

Stochastic resonance is said to be observed when increases in levels of unpredictable fluctuations—e.g., random noise—cause an increase in a metric of the quality of signal transmission or detection performance, rather than a decrease. This counterintuitive effect relies on system nonlinearities and on some parameter ranges being “suboptimal”. Stochastic resonance has been observed, quantified, and described in a plethora of physical and biological systems, including neurons. Being a topic of widespread multidisciplinary interest, the definition of stochastic resonance has evolved significantly over the last decade or so, leading to a number of debates, misunderstandings, and controversies. Perhaps the most important debate is whether the brain has evolved to utilize random noise in vivo, as part of the “neural code”. Surprisingly, this debate has been for the most part ignored by neuroscientists, despite much indirect evidence of a positive role for noise in the brain. We explore some of the reasons for this and argue why it would be more surprising if the brain did not exploit randomness provided by noise—via stochastic resonance or otherwise—than if it did. We also challenge neuroscientists and biologists, both computational and experimental, to embrace a very broad definition of stochastic resonance in terms of signal-processing “noise benefits”, and to devise experiments aimed at verifying that random variability can play a functional role in the brain, nervous system, or other areas of biology.
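
The core effect is easy to reproduce numerically. Below is a minimal sketch (not from the paper; the signal, threshold, and noise levels are all illustrative): a subthreshold sinusoid is passed through a hard threshold detector with added Gaussian noise, and the input-output correlation, one possible "metric of the quality of signal transmission", peaks at a nonzero noise level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Subthreshold sinusoid: peak amplitude 0.8 stays below the threshold of 1.0,
# so with no noise the detector never responds at all.
n = 200_000
t = np.arange(n)
signal = 0.8 * np.sin(2 * np.pi * t / 100.0)
threshold = 1.0

for sigma in [0.1, 0.3, 0.5, 1.0, 2.0, 5.0]:
    noise = rng.normal(0.0, sigma, size=n)
    output = (signal + noise > threshold).astype(float)
    # Input-output Pearson correlation as the signal-quality metric:
    # it rises, peaks at an intermediate noise level, then falls again.
    rho = np.corrcoef(signal, output)[0, 1]
    print(f"noise sigma = {sigma:3.1f}  correlation = {rho:.3f}")
```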


Environmental Modeling & Assessment | 2002

Mathematical methods for spatially cohesive reserve design

Mark D. McDonnell; Hugh P. Possingham; Ian R. Ball; Elizabeth A. Cousins

The problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem. The multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve. The task is to reserve a subset of sites that best meet this objective. We use data on the distribution of habitats in the Northern Territory, Australia, to show how simulated annealing and a greedy heuristic algorithm can be used to generate good solutions to such large reserve design problems, and to compare the effectiveness of these methods.
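
As a rough illustration of the approach, here is a toy simulated-annealing sketch on made-up gridded habitat data; the objective weights, cooling schedule, and data are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: a 20x20 grid of candidate sites, each containing some of
# 5 habitat types; each habitat has a representation target.
H, W, NHAB = 20, 20, 5
habitat = rng.random((H, W, NHAB)) < 0.15   # which habitats occur where
targets = np.full(NHAB, 20)                 # required occurrences per habitat

BLM, SPF = 1.0, 50.0   # boundary-length and shortfall weights (illustrative)

def cost(sel):
    area = sel.sum()
    # Boundary length: edges between selected and unselected/outside cells.
    padded = np.pad(sel, 1)
    boundary = sum(np.sum(padded & ~np.roll(padded, s, axis=a))
                   for a in (0, 1) for s in (1, -1))
    held = habitat[sel].sum(axis=0)         # habitat occurrences reserved
    shortfall = np.maximum(targets - held, 0).sum()
    return area + BLM * boundary + SPF * shortfall

sel = rng.random((H, W)) < 0.3              # random initial reserve
c = cost(sel)
T = 10.0
for step in range(50_000):                  # simulated annealing loop
    i, j = rng.integers(H), rng.integers(W)
    sel[i, j] = ~sel[i, j]                  # flip one site in/out
    c_new = cost(sel)
    if c_new <= c or rng.random() < np.exp((c - c_new) / T):
        c = c_new                           # accept the move
    else:
        sel[i, j] = ~sel[i, j]              # reject: undo the flip
    T *= 0.9998                             # geometric cooling

print("final cost:", c, "sites selected:", int(sel.sum()))
```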


Physics Letters A | 2006

Optimal information transmission in nonlinear arrays through suprathreshold stochastic resonance

Mark D. McDonnell; Nigel G. Stocks; Charles E. M. Pearce; Derek Abbott

Affiliations: School of Electrical and Electronic Engineering & Centre for Biomedical Engineering, The University of Adelaide, SA 5005, Australia; School of Engineering, The University of Warwick, Coventry CV4 7AL, United Kingdom; School of Mathematical Sciences, The University of Adelaide, SA 5005, Australia.


Fluctuation and Noise Letters | 2002

A characterization of suprathreshold stochastic resonance in an array of comparators by correlation coefficient

Mark D. McDonnell; Derek Abbott; Charles E. M. Pearce

Suprathreshold Stochastic Resonance (SSR), as described recently by Stocks, is a new form of Stochastic Resonance (SR) which occurs in arrays of nonlinear elements subject to aperiodic input signals and noise. These array elements can be threshold devices or FitzHugh-Nagumo neuron models, for example. The distinguishing feature of SSR is that the output measure of interest is not maximized simply for nonzero values of input noise, but is maximized for nonzero values of the input noise-to-signal intensity ratio, and the effect occurs for signals of arbitrary magnitude, not just subthreshold signals. The original papers described SSR in terms of information theory. Previous work on SR has used correlation-based measures to quantify SR for aperiodic input signals. Here, we argue for the validity of correlation-based measures, derive exact expressions for the cross-correlation coefficient in the same system as the original work, and show that the SSR effect also occurs in this alternative measure. If the output signal is thought of as a digital estimate of the input signal, then the output noise can be considered simply as quantization noise. We therefore derive an expression for the output signal-to-quantization-noise ratio, and show that SSR also occurs in this measure.
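
A minimal numerical sketch of the setup (illustrative parameters, not the paper's exact system): an array of N comparators thresholds the sum of a common Gaussian signal and independent per-device Gaussian noise, and the correlation coefficient between the input and the count output peaks at a nonzero noise-to-signal ratio.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 31                                   # comparators in the array
n = 100_000
x = rng.normal(0.0, 1.0, size=n)         # common Gaussian input signal

for ratio in [0.0, 0.2, 0.5, 1.0, 2.0, 4.0]:
    # Independent zero-mean Gaussian noise per device; all thresholds at 0.
    eta = rng.normal(0.0, 1.0, size=(N, n)) * ratio
    y = (x + eta > 0.0).sum(axis=0)      # array output: count of crossings
    rho = np.corrcoef(x, y)[0, 1]
    print(f"noise/signal ratio = {ratio:3.1f}  correlation = {rho:.4f}")
```

With zero noise all devices are identical and the array acts as a single one-bit quantizer; moderate independent noise decorrelates the devices, so the count output resolves the signal more finely and the correlation rises before heavy noise washes it out.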


Physical Review Letters | 2008

Maximally informative stimuli and tuning curves for sigmoidal rate-coding neurons and populations.

Mark D. McDonnell; Nigel G. Stocks

A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon's mutual information and Fisher information, and the optimality of the Jeffreys prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
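
A sketch of the simplest special case only (an assumption, not the paper's general formula): when the response variance is approximately constant, maximizing mutual information reduces to histogram equalization, so the optimal tuning curve is a rescaled stimulus cumulative distribution function. The gamma-distributed stimulus and the firing-rate range below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed stimulus prior (illustrative); any distribution would do.
stimulus = rng.gamma(shape=2.0, scale=1.5, size=100_000)

r_min, r_max = 1.0, 100.0                # illustrative firing-rate range
grid = np.linspace(0.0, stimulus.max(), 200)
cdf = np.searchsorted(np.sort(stimulus), grid) / stimulus.size
tuning_curve = r_min + (r_max - r_min) * cdf   # rate as a function of stimulus

# The resulting rates are approximately uniform over [r_min, r_max],
# i.e. every rate interval is used equally often (histogram equalization).
rates = np.interp(stimulus, grid, tuning_curve)
print("rate quartiles:", np.percentile(rates, [25, 50, 75]).round(1))
```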


Neurocomputing | 2016

Deep extreme learning machines

Migel D. Tissera; Mark D. McDonnell

We present a method for synthesising deep neural networks using Extreme Learning Machines (ELMs) as a stack of supervised autoencoders. We test the method using standard benchmark datasets for multi-class image classification (MNIST, CIFAR-10 and Google Streetview House Numbers (SVHN)), and show that the classification error rate can progressively improve with the inclusion of additional autoencoding ELM modules in a stack. Moreover, we found that the method can correctly classify up to 99.19% of MNIST test images, which surpasses the best results reported for standard 3-layer ELMs or previous deep ELM approaches applied to MNIST. The approach simultaneously offers a significantly faster training algorithm to achieve its best performance (on the order of 5 min on a four-core CPU for MNIST) relative to a single ELM with the same total number of hidden units as the deep ELM, hence offering the best of both worlds: lower error rates and fast implementation.
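
The following is a minimal sketch in the spirit of the abstract, not the authors' exact algorithm: it uses the common ELM-autoencoder construction in which the ridge-regression output weights of each module, transposed, become that layer's forward transform. The layer sizes, regularization, and toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def ridge(H, T, lam=1e-2):
    """Closed-form ELM output weights: solve (H'H + lam*I) B = H'T."""
    A = H.T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ T)

def elm_ae_layer(X, n_hidden):
    """One autoencoding ELM module: random projection -> nonlinearity ->
    ridge regression back onto X; the learned weights, transposed,
    become the layer's transform for the next module."""
    W = rng.normal(0.0, 1.0 / np.sqrt(X.shape[1]), (X.shape[1], n_hidden))
    H = np.tanh(X @ W)
    B = ridge(H, X)             # reconstruct the input from the hidden layer
    return np.tanh(X @ B.T)     # representation passed to the next module

# Toy data standing in for image features; use real datasets in practice.
X = rng.normal(size=(2000, 256))
y = rng.integers(0, 10, size=2000)
T = np.eye(10)[y]               # one-hot targets

# Stack autoencoding modules, then train a final ELM classifier on top.
Z = X
for n_hidden in (512, 512):
    Z = elm_ae_layer(Z, n_hidden)
W = rng.normal(0.0, 1.0 / np.sqrt(Z.shape[1]), (Z.shape[1], 1024))
H = np.tanh(Z @ W)
B = ridge(H, T)
train_acc = (np.argmax(H @ B, axis=1) == y).mean()
print(f"training accuracy on toy data: {train_acc:.3f}")
```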


Frontiers in Computational Neuroscience | 2011

Methods for Generating Complex Networks with Selected Structural Properties for Simulations: A Review and Tutorial for Neuroscientists

Brenton J. Prettejohn; Matthew J. Berryman; Mark D. McDonnell

Many simulations of networks in computational neuroscience assume completely homogeneous random networks of the Erdős–Rényi type, or regular networks, despite it being recognized for some time that anatomical brain networks are more complex in their connectivity and can, for example, exhibit the “scale-free” and “small-world” properties. We review the best-known algorithms for constructing networks with given non-homogeneous statistical properties and provide simple pseudo-code for reproducing such networks in software simulations. We also review some useful mathematical results and approximations associated with the statistics that describe these network models, including degree distribution, average path length, and clustering coefficient. We demonstrate how such results can be used as partial verification and validation of implementations. Finally, we discuss a sometimes overlooked modeling choice that can be crucially important for the properties of simulated networks: that of network directedness. The best-known network algorithms produce undirected networks, and we emphasize this point by highlighting how simple adaptations can instead produce directed networks.
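
As a flavour of what such generators look like (this is a generic textbook Barabási–Albert generator, not the paper's own pseudo-code), the following grows an undirected scale-free network by preferential attachment.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow an undirected scale-free graph: each new node attaches to m
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = set()
    # Preferential attachment via a repeated-endpoints list: each node
    # appears once per unit of degree, so uniform sampling from this list
    # is degree-proportional sampling.
    repeated = []
    nodes = list(range(m))              # start from m isolated seed nodes
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            # The first node attaches to all seeds; later nodes sample by degree.
            pick = rng.choice(repeated) if repeated else rng.choice(nodes)
            targets.add(pick)
        for t in targets:
            edges.add((min(new, t), max(new, t)))
            repeated.extend([new, t])   # update the degree-weighted list
        nodes.append(new)
    return edges

g = barabasi_albert(1000, 3)
deg = {}
for u, v in g:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print("edges:", len(g), "max degree:", max(deg.values()))  # heavy-tailed degrees
```

Making such a generator directed, the modeling choice the review highlights, is a small adaptation: for example, new nodes could form out-links by sampling targets in proportion to in-degree only.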


PLOS ONE | 2015

Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

Mark D. McDonnell; Migel D. Tissera; Tony Vladusich; André van Schaik; Jonathan Tapson

Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration, either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
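
A minimal sketch of the main innovation described above, with illustrative sizes and toy data standing in for MNIST: each hidden unit's input weights are nonzero only within one randomly sized and positioned image patch, and the output weights are fit by ridge regression as in a standard ELM.

```python
import numpy as np

rng = np.random.default_rng(5)

IMG = 28                                  # e.g. 28x28 MNIST-sized images
D = IMG * IMG
n_hidden = 2000

# Sparse input weights: each hidden unit sees only one randomly sized and
# positioned square patch of the image; all other weights stay zero.
W = np.zeros((D, n_hidden))
for j in range(n_hidden):
    size = rng.integers(4, 15)            # random receptive-field side length
    r = rng.integers(0, IMG - size + 1)
    c = rng.integers(0, IMG - size + 1)
    mask = np.zeros((IMG, IMG), dtype=bool)
    mask[r:r + size, c:c + size] = True
    W[mask.ravel(), j] = rng.normal(0.0, 1.0, size * size)

print(f"fraction of zero input weights: {(W == 0).mean():.2f}")  # roughly 0.9

# Standard ELM readout: output weights by ridge regression to one-hot labels.
X = rng.random((1000, D))                 # toy stand-in for MNIST images
y = rng.integers(0, 10, 1000)
H = np.tanh(X @ W)
B = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_hidden), H.T @ np.eye(10)[y])
pred = np.argmax(H @ B, axis=1)
print("training accuracy on toy data:", (pred == y).mean())
```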


Physical Review E | 2007

Optimal stimulus and noise distributions for information transmission via suprathreshold stochastic resonance

Mark D. McDonnell; Nigel G. Stocks; Derek Abbott

Suprathreshold stochastic resonance (SSR) is a form of noise-enhanced signal transmission that occurs in a parallel array of independently noisy identical threshold nonlinearities, including model neurons. Unlike most forms of stochastic resonance, the output response to suprathreshold random input signals of arbitrary magnitude is improved by the presence of even small amounts of noise. In this paper, the information transmission performance of SSR in the limit of a large array size is considered. Using a relationship between Shannon's mutual information and Fisher information, a sufficient condition for optimality, i.e., channel capacity, is derived. It is shown that capacity is achieved when the signal distribution is the Jeffreys prior formed from the noise distribution, or when the noise distribution depends on the signal distribution via a cosine relationship. These results provide theoretical verification and justification for previous work in both computational neuroscience and electronics.
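
The channel the abstract analyzes can be summarized numerically: conditioned on the input x, the count of threshold crossings is binomial, so I(X;Y) can be evaluated directly. A sketch with illustrative parameters (Gaussian signal and noise, thresholds at zero) follows; optimizing over signal distributions, which the paper shows leads to the Jeffreys prior, would need an extra capacity-optimization step not included here.

```python
import numpy as np
from scipy.stats import norm, binom

# SSR array as a channel: for input x, each of N devices independently
# fires with probability q(x) = P(x + noise > 0), so Y | X=x is
# Binomial(N, q(x)). Mutual information I(X;Y) in bits:
def ssr_mutual_information(N, noise_sigma, n_grid=400):
    xs = np.linspace(-5.0, 5.0, n_grid)          # unit-variance Gaussian input
    px = norm.pdf(xs)
    px /= px.sum()                               # P(X = x) on the grid
    q = norm.cdf(xs / noise_sigma)               # firing probability q(x)
    ys = np.arange(N + 1)
    pyx = binom.pmf(ys[None, :], N, q[:, None])  # P(Y = y | X = x)
    py = px @ pyx                                # marginal P(Y = y)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(pyx > 0, pyx / py[None, :], 1.0)
        return np.sum(px[:, None] * pyx * np.log2(ratio))

for sigma in [0.1, 0.3, 0.6, 1.0, 2.0]:
    print(f"noise sigma = {sigma:3.1f}  I(X;Y) = "
          f"{ssr_mutual_information(N=63, noise_sigma=sigma):.3f} bits")
```

As the noise vanishes, all devices agree and the array carries at most one bit; moderate noise pushes the mutual information well above that before large noise degrades it again.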


Digital Image Computing: Techniques and Applications | 2016

Understanding Data Augmentation for Classification: When to Warp?

Sebastien Wong; Adam Gatt; Victor Stamatescu; Mark D. McDonnell

In this paper we investigate the benefit of augmenting data with synthetically created samples when training a machine learning classifier. Two approaches for creating additional training samples are data warping, which generates additional samples through transformations applied in data space, and synthetic over-sampling, which creates additional samples in feature space. We experimentally evaluate the benefits of data augmentation for a convolutional backpropagation-trained neural network, a convolutional support vector machine and a convolutional extreme learning machine classifier, using the standard MNIST handwritten digit dataset. We found that while it is possible to perform generic augmentation in feature space, if plausible transforms for the data are known then augmentation in data space provides a greater benefit for improving performance and reducing overfitting.
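
To make the two families concrete, here is a generic sketch (not the paper's specific methods): a data-space warp implemented as a small random image translation, and a SMOTE-style feature-space interpolation between same-class neighbours.

```python
import numpy as np

rng = np.random.default_rng(6)

def warp_image(img, max_shift=2):
    """Data-space augmentation: a label-preserving transform of the raw
    image (here a small random translation; elastic or affine warps are
    common for MNIST)."""
    dr, dc = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.zeros_like(img)
    r0, r1 = max(dr, 0), img.shape[0] + min(dr, 0)
    c0, c1 = max(dc, 0), img.shape[1] + min(dc, 0)
    out[r0:r1, c0:c1] = img[r0 - dr:r1 - dr, c0 - dc:c1 - dc]
    return out

def smote_sample(features, k=5):
    """Feature-space augmentation (SMOTE-style synthetic over-sampling):
    interpolate between a sample and one of its k nearest same-class
    neighbours in feature space."""
    i = rng.integers(len(features))
    d = np.linalg.norm(features - features[i], axis=1)
    neighbours = np.argsort(d)[1:k + 1]     # exclude the point itself
    j = rng.choice(neighbours)
    alpha = rng.random()
    return features[i] + alpha * (features[j] - features[i])

img = rng.random((28, 28))                  # toy image
feats = rng.random((100, 64))               # toy same-class feature vectors
print(warp_image(img).shape, smote_sample(feats).shape)
```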

Collaboration


Dive into Mark D. McDonnell's collaborations.

Top Co-Authors

Lawrence M. Ward, University of British Columbia
Xiao Gao, University of South Australia
Migel D. Tissera, University of South Australia
Pierre-Olivier Amblard, Centre national de la recherche scientifique
Tony Vladusich, University of South Australia
Ashutosh Mohan, Australian National University