Publication


Featured research published by Matthew C. Casey.


Connection Science | 2002

Connectionist simulation of quantification skills

Khurshid Ahmad; Matthew C. Casey; Tracey Ann Bale

The study of numerical abilities, and how they are acquired, is being used to explore the continuity between ontogenesis and environmental learning. One technique that proves useful in this exploration is the artificial simulation of numerical abilities with neural networks, using different learning paradigms to explore development. A neural network simulation of subitization, sometimes referred to as visual enumeration, and of counting, a recurrent operation, has been developed using the so-called multi-net architecture. Our numerical ability simulations use two or more neural networks combining supervised and unsupervised learning techniques to model subitization and counting. Subitization has been simulated using networks employing unsupervised self-organizing learning, the results of which agree with infant subitization experiments and are comparable with supervised neural network simulations of subitization reported in the literature. Counting has been simulated using a multi-net system of supervised static and recurrent backpropagation networks that learn their individual tasks within an unsupervised, competitive framework. The developmental profile of the counting simulation shows similarities to that of children learning to count and demonstrates how neural networks can learn to be combined in a process that models development.
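
As a rough sketch of the unsupervised half of this architecture (my own illustration, not the authors' code), the following Python trains a small one-dimensional self-organizing map on binary displays whose number of active units stands in for numerosity; all sizes and learning parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_display(n_items, size=16):
    """Binary 'display' vector with n_items randomly placed active units."""
    v = np.zeros(size)
    v[rng.choice(size, n_items, replace=False)] = 1.0
    return v

# 1-D self-organizing map: map_size nodes, each with a weight vector.
map_size, input_size = 10, 16
W = rng.random((map_size, input_size))

def train_som(W, samples, epochs=200, lr0=0.5, sigma0=3.0):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                      # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5          # shrinking neighbourhood
        for x in samples:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
            d = np.abs(np.arange(map_size) - bmu)
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
    return W

samples = [make_display(n) for n in (1, 2, 3, 4) for _ in range(50)]
W = train_som(W, samples)

# If the map has organised by numerosity, nearby counts should tend to win
# at nearby map locations.
for n in (1, 2, 3, 4):
    bmu = np.argmin(np.linalg.norm(W - make_display(n), axis=1))
    print(f"numerosity {n} -> winning node {bmu}")
```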


Neurocomputing | 2008

A theoretical framework for multiple neural network systems

Michael W. Shields; Matthew C. Casey

Multiple neural network systems have become popular techniques for tackling complex tasks, often giving improved performance compared to single network systems. For example, modular systems can provide improvements in generalisation through task decomposition, whereas multiple classifier and regressor systems typically improve generalisation through the ensemble combination of redundant networks. Whilst there has been significant focus on understanding the theoretical properties of some of these multi-net systems, particularly ensemble systems, there has been little theoretical work on understanding the properties of the generic combination of networks, important in developing more complex systems, perhaps even those a step closer to their biological counterparts. In this article, we provide a formal framework in which the generic combination of neural networks can be described, and in which the properties of the system can be rigorously analysed. We achieve this by describing multi-net systems in terms of partially ordered sets and state transition systems. By way of example, we explore an abstract version of learning applied to a generic multi-net system that can combine an arbitrary number of networks in sequence and in parallel. By using the framework we show with a constructive proof that, under specific conditions, if it is possible to train the generic system, then training can be achieved by the abstract technique described.
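
The following is a minimal, informal sketch of the kind of composition the framework formalises: component networks (stubbed here as plain functions) combined in sequence and in parallel. The class names Sequence and Parallel, and the stub networks, are my own assumptions and not part of the paper's poset or state-transition formalism.

```python
import numpy as np

class Sequence:
    """Output of one component feeds the next (composition in sequence)."""
    def __init__(self, *parts):
        self.parts = parts
    def __call__(self, x):
        for p in self.parts:
            x = p(x)
        return x

class Parallel:
    """Components see the same input; their outputs are concatenated."""
    def __init__(self, *parts):
        self.parts = parts
    def __call__(self, x):
        return np.concatenate([np.atleast_1d(p(x)) for p in self.parts])

# Hypothetical component "networks" (stand-ins for trained neural networks).
net_a = lambda x: np.tanh(x)
net_b = lambda x: x ** 2
combiner = lambda x: x.mean()

# A generic multi-net: two networks combined in parallel, then a combiner in sequence.
system = Sequence(Parallel(net_a, net_b), combiner)
print(system(np.array([0.5, -1.0, 2.0])))
```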


international symposium on neural networks | 2010

Simulating the effects of cortical feedback in the superior colliculus with topographic maps

Athanasios Pavlou; Matthew C. Casey

The superior colliculus (SC) is a neural structure found in mammalian brains that acts as a sensory hub through which visual, auditory and somatosensory inputs are integrated. This integration is used to orient the eye's fovea towards a prominent stimulus, independently of which sensory modality it was detected in. A recently observed aspect of this integration is that it is moderated by cortical feedback. As a key sensorimotor function integrating low-level sensory information moderated by the cortex, studying the SC may therefore enable us to understand how natural systems prioritize sensory computation in real-time, possibly as a result of task-dependent feedback. In this paper we focus on such a biological model: from a computational perspective, capturing this combination of bottom-up processing with top-down moderation is appealing. We present for the first time a behavioral model of the SC which combines the development of unisensory and multisensory representations with simulated cortical feedback. Our model demonstrates how unisensory maps can be aligned and integrated automatically into a multisensory representation. Results demonstrate that our model can capture the basic properties of the SC, and in particular they show the influence of the simulated cortical feedback on multisensory responses, reproducing the multisensory enhancement and suppression phenomena observed in biological studies. This suggests that our unified competitive learning approach may successfully be used to represent spatial processing that is moderated by task, and hence could be more widely applied to other, task-dependent processing.
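
A minimal numerical illustration of the enhancement/suppression idea (not the authors' model): unisensory activity bumps on a shared topographic map are summed, and a hypothetical cortical feedback gain scales the combined response above or below the unisensory level. All values are assumptions chosen for illustration.

```python
import numpy as np

def bump(center, size=50, width=3.0):
    """Gaussian activity bump on a 1-D topographic map."""
    x = np.arange(size)
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

visual = bump(center=25)
auditory = bump(center=25)                  # spatially aligned auditory stimulus

def sc_response(vis, aud, feedback_gain=1.0):
    # Additive integration of aligned unisensory maps, scaled by a
    # hypothetical multiplicative cortical feedback gain.
    return feedback_gain * (vis + aud)

uni = sc_response(visual, np.zeros_like(auditory)).max()
multi_on = sc_response(visual, auditory, feedback_gain=1.5).max()
multi_off = sc_response(visual, auditory, feedback_gain=0.4).max()

print(f"unisensory peak:             {uni:.2f}")
print(f"multisensory, feedback on:   {multi_on:.2f}  (enhancement)")
print(f"multisensory, feedback off:  {multi_off:.2f}  (suppression)")
```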


international conference on advances in pattern recognition | 2005

Configuration of neural networks for the analysis of seasonal time series

Tugba Taskaya-Temizel; Matthew C. Casey

Time series often exhibit periodical patterns that can be analysed by conventional statistical techniques. These techniques rely upon an appropriate choice of model parameters that are often difficult to determine. Whilst neural networks also require an appropriate parameter configuration, they offer a way in which non-linear patterns may be modelled. However, evidence from a limited number of experiments has been used to argue that periodical patterns cannot be modelled using such networks. In this paper, we present a method to overcome the perceived limitations of this approach by determining the configuration parameters of a time-delay neural network from the seasonal data it is being used to model. Our method uses a fast Fourier transform to calculate the number of tapped input delays, with results demonstrating improved performance compared to that of other linear and hybrid seasonal modelling techniques.
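
A minimal sketch of the tapped-delay selection idea (not the authors' code), assuming a synthetic monthly series with a 12-step cycle; the helper seasonal_period and the parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series with a 12-step seasonal cycle plus noise.
n = 240
t = np.arange(n)
series = 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, n)

def seasonal_period(x):
    x = x - x.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0)
    k = np.argmax(spectrum[1:]) + 1         # strongest non-zero frequency
    return int(round(1.0 / freqs[k]))

n_delays = seasonal_period(series)
print(f"estimated period, used as the number of tapped delays: {n_delays}")

# The lag window then defines the network's input vectors and targets.
X = np.array([series[i:i + n_delays] for i in range(n - n_delays)])
y = series[n_delays:]
print(X.shape, y.shape)
```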


international symposium on neural networks | 2008

A behavioral model of sensory alignment in the superficial and deep layers of the superior colliculus

Matthew C. Casey; Athanasios Pavlou

The ability to combine sensory information is an important attribute of the brain. Multisensory integration in natural systems suggests that a similar approach in artificial systems may be important. Multisensory integration is exemplified in mammals by the superior colliculus (SC), which combines visual, auditory and somatosensory stimuli to shift gaze. However, although we have a good understanding of the overall architecture of the SC, as yet we do not fully understand the process of integration. While a number of computational models of the SC have been developed, there has not been a larger scale implementation that can help determine how the senses are aligned and integrated across the superficial and deep layers of the SC. In this paper we describe a prototype implementation of the mammalian SC consisting of self-organizing maps linked by Hebbian connections, modeling visual and auditory processing in the superficial and deep layers. The model is trained on artificial auditory and visual stimuli, with testing demonstrating the formation of appropriate spatial representations, which compare well with biological data. Subsequently, we train the model on multisensory stimuli, testing to see if the unisensory maps can be combined. The results show the successful alignment of sensory maps to form a multisensory representation. We conclude that, while simple, the model lends itself to further exploration of integration, which may give insight into whether such modeling is of benefit computationally.
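
A minimal sketch of the map-linking idea (not the authors' implementation): two one-dimensional topographic maps, stubbed as Gaussian activity bumps rather than trained SOMs, are linked by a Hebbian weight matrix strengthened whenever spatially aligned stimuli co-activate them. Sizes and learning rate are assumptions.

```python
import numpy as np

map_size = 20
rng = np.random.default_rng(0)

def activity(center, width=2.0):
    """Gaussian activity bump at a map location."""
    x = np.arange(map_size)
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

H = np.zeros((map_size, map_size))    # Hebbian weights: visual node -> auditory node
eta = 0.1

# Training: present spatially aligned stimuli and strengthen co-active pairs.
for _ in range(500):
    loc = rng.integers(0, map_size)
    v, a = activity(loc), activity(loc)
    H += eta * np.outer(v, a)         # Hebbian update: pre * post co-activity

# Testing: a visual-only stimulus should now activate the aligned auditory site.
v_test = activity(5)
a_pred = v_test @ H
print("visual peak:", int(np.argmax(v_test)),
      "-> predicted auditory peak:", int(np.argmax(a_pred)))
```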


international conference on multiple classifier systems | 2003

Combining multiple modes of information using unsupervised neural classifiers

Khurshid Ahmad; Matthew C. Casey; Bogdan Vrusias; Panagiotis Saragiotis

A modular neural network-based system is presented where the component networks learn together to classify a set of complex input patterns. Each pattern comprises two vectors: a primary vector and a collateral vector. Examples of such patterns include annotated images and magnitudes with articulated numerical labels. Our modular system is trained using an unsupervised learning algorithm. One component learns to classify the patterns using the primary vectors and another classifies the same patterns using the collateral vectors. A third, combiner network correlates the primary vectors with the collateral vectors. The primary and collateral vectors are mapped on a Kohonen self-organising feature map (SOM), with the combiner based on a variant of Hebbian networks. The classification results appear encouraging in our attempts to classify a set of scene-of-crime images and in our attempts to investigate how pre-school infants relate magnitude to articulated numerical quantities. Certain features of SOMs, namely the topological neighbourhoods of specific nodes, allow one-to-many mappings between the primary and collateral maps, hence establishing a broader association between the two vectors than that due to synchrony in a conventional Hebbian network.
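
A minimal sketch of the one-to-many association mentioned above (my own illustration): the topological neighbourhood of a primary-map winner, rather than the single winning node, is linked to the collateral-map winner. The winner indices and parameters are hypothetical.

```python
import numpy as np

primary_size, collateral_size = 15, 10
A = np.zeros((primary_size, collateral_size))    # association weights

def neighbourhood(winner, size, sigma=1.5):
    """Topological neighbourhood around a winning node on a 1-D map."""
    d = np.abs(np.arange(size) - winner)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def associate(primary_winner, collateral_winner, eta=0.2):
    # The whole neighbourhood of the primary winner, not just the single
    # node, is linked to the collateral winner.
    A[:, collateral_winner] += eta * neighbourhood(primary_winner, primary_size)

# Hypothetical training pairs: (primary winner node, collateral winner node).
for p, c in [(3, 7), (4, 7), (11, 2)]:
    associate(p, c)

# A primary node that never won still retrieves the nearby collateral class.
print(int(np.argmax(A[5])))    # node 5 is adjacent to winners 3 and 4 -> collateral 7
```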


international conference on neural information processing | 2008

Identifying emotions using topographic conditioning maps

Athanasios Pavlou; Matthew C. Casey

The amygdala is the neural structure that acts as an evaluator of potentially threatening stimuli. We present a biologically plausible model of the visual fear conditioning pathways leading to the amygdala, using a topographic conditioning map (TCM). To evaluate the model, we first use abstract stimuli to understand its ability to form topographic representations, and subsequently its ability to condition on arbitrary stimuli. We then present results on facial emotion recognition using the sub-cortical pathway of the model. Compared to other emotion classification approaches, our model performs well, but does not require features to be pre-specified. This generic ability to organise visual stimuli is enhanced through conditioning, which also improves classification performance. Our approach demonstrates that a biologically motivated model can be applied to real-world tasks, while allowing us to explore biological hypotheses.
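
A minimal, speculative sketch of the conditioning component (not the authors' TCM): nodes of a topographic activity map acquire an association with an aversive signal through a Rescorla-Wagner-style prediction-error update. The map, stimuli and learning rate are assumptions for illustration.

```python
import numpy as np

map_size = 30
w = np.zeros(map_size)                  # conditioned-association strengths

def activity(center, width=2.0):
    """Gaussian activity bump representing a stimulus on the topographic map."""
    x = np.arange(map_size)
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def condition(cs_activity, us_present, alpha=0.2):
    # A prediction error between the aversive signal (US) and the map's
    # current prediction drives learning, so only US-paired regions of the
    # map acquire a conditioned response.
    global w
    prediction = w @ cs_activity
    w += alpha * (float(us_present) - prediction) * cs_activity

for _ in range(200):
    condition(activity(8), us_present=True)     # stimulus at node 8 paired with the US
    condition(activity(22), us_present=False)   # stimulus at node 22 never paired

print("response to conditioned stimulus:", round(float(w @ activity(8)), 2))
print("response to unpaired stimulus:   ", round(float(w @ activity(22)), 2))
```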


international conference on artificial neural networks | 2012

Simulating light adaptation in the retina with rod-cone coupling

Kendi Muchungi; Matthew C. Casey

The retina performs various key operations on incoming images in order to facilitate higher-level visual processing. Since the retina outperforms existing image-enhancement techniques, it follows that biologically plausible computational simulations are well suited to inform their design and development, as well as to help us better understand retinal function. Recently, it has been determined that the quality of vision depends on the interaction between the rod and cone pathways, which were traditionally thought to be wholly autonomous. This interaction improves the signal-to-noise ratio (SNR) within the retina and in turn enhances boundary detection by cones. In this paper we therefore propose the first cone simulator that incorporates input from rods. Our results show that rod-cone convergence does improve SNR, thereby allowing for improved contrast sensitivity and, consequently, visual perception.
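
A minimal sketch of the coupling principle (not the authors' retina simulator): a noisy cone channel is combined with a smoothed, independently noisy rod channel, and the SNR of the combined response is compared with the cone channel alone. The coupling weight and noise levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0, 4 * np.pi, 500)
scene = np.sin(x)                                    # "true" luminance profile

cone = scene + rng.normal(0, 0.5, x.size)            # noisy cone channel
rod = scene + rng.normal(0, 0.5, x.size)             # independently noisy rod channel

def smooth(v, k=9):
    """Crude spatial pooling of the rod signal."""
    return np.convolve(v, np.ones(k) / k, mode="same")

coupling = 0.5                                       # hypothetical rod-cone coupling weight
coupled = (1 - coupling) * cone + coupling * smooth(rod)

def snr_db(estimate):
    noise = estimate - scene
    return 10 * np.log10(np.mean(scene ** 2) / np.mean(noise ** 2))

print(f"cone alone:          {snr_db(cone):5.2f} dB")
print(f"cone with rod input: {snr_db(coupled):5.2f} dB")
```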


international conference on artificial neural networks | 2012

Evaluating the effect of spiking network parameters on polychronization

Panagiotis Ioannou; Matthew C. Casey; André Grüning

Spiking neural networks (SNNs) are considered to be more biologically realistic than typical rate-coded networks, as they can closely model different types of neurons and their temporal dynamics. Typical spiking models use a number of fixed parameters, such as the ratio between excitatory and inhibitory neurons. However, the parameters that are used in these models focus almost exclusively on our understanding of the neocortex with, for example, 80% of neurons chosen as excitatory and 20% inhibitory. In this paper we evaluate how varying the ratio of excitatory to inhibitory neurons, the axonal conduction delays and the number of synaptic connections affects an SNN model, by observing the change in mean firing rate and polychronization. Our main focus is to examine the effect on the emergence of spatiotemporal time-locked patterns, known as polychronous groups (PNGs). We show that the number of PNGs varies dramatically with the proportion of inhibitory neurons, increases exponentially as the number of synaptic connections is increased, and decreases as the maximum axonal delay in the network increases. Our findings show that if we are to use SNNs and PNGs to model cognitive functions, we must take these critical parameters into account.
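
A minimal sketch of the kind of parameter sweep described (not the authors' code): a network of Izhikevich neurons, following the parameterisation of Izhikevich's published model, is simulated for different excitatory fractions while recording the mean firing rate; detecting polychronous groups is beyond this sketch, and the sweep values are my own choices.

```python
import numpy as np

def simulate(n=1000, exc_fraction=0.8, t_ms=1000, seed=0):
    """Izhikevich-style network; returns the mean firing rate in Hz."""
    rng = np.random.default_rng(seed)
    ne = int(n * exc_fraction); ni = n - ne
    re, ri = rng.random(ne), rng.random(ni)
    a = np.concatenate([0.02 * np.ones(ne), 0.02 + 0.08 * ri])
    b = np.concatenate([0.20 * np.ones(ne), 0.25 - 0.05 * ri])
    c = np.concatenate([-65 + 15 * re ** 2, -65 * np.ones(ni)])
    d = np.concatenate([8 - 6 * re ** 2, 2 * np.ones(ni)])
    S = np.hstack([0.5 * rng.random((n, ne)), -rng.random((n, ni))])  # synaptic weights
    v = -65.0 * np.ones(n)
    u = b * v
    spikes = 0
    for _ in range(t_ms):                                    # 1 ms resolution
        I = np.concatenate([5 * rng.normal(size=ne), 2 * rng.normal(size=ni)])
        fired = v >= 30
        spikes += fired.sum()
        v[fired] = c[fired]
        u[fired] += d[fired]
        I += S[:, fired].sum(axis=1)
        v += 0.5 * (0.04 * v ** 2 + 5 * v + 140 - u + I)     # two half-steps
        v += 0.5 * (0.04 * v ** 2 + 5 * v + 140 - u + I)     # for numerical stability
        u += a * (b * v - u)
    return spikes / n / (t_ms / 1000.0)

for frac in (0.7, 0.8, 0.9):
    print(f"excitatory fraction {frac:.1f}: {simulate(exc_fraction=frac):.1f} Hz")
```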


Neural Networks | 2012

Modeling learned categorical perception in human vision

Matthew C. Casey; Paul T. Sowden

A long-standing debate in cognitive neuroscience has been the extent to which perceptual processing is influenced by prior knowledge and experience with a task. A converging body of evidence now supports the view that a task does influence perceptual processing, leaving us with the challenge of understanding the locus of, and mechanisms underpinning, these influences. An exemplar of this influence is learned categorical perception (CP), in which there is superior perceptual discrimination of stimuli that are placed in different categories. Psychophysical experiments on humans have attempted to determine whether early cortical stages of visual analysis change as a result of learning a categorization task. However, while some results indicate that changes in visual analysis occur, the extent to which earlier stages of processing are changed is still unclear. To explore this issue, we develop a biologically motivated neural model of hierarchical vision processes consisting of a number of interconnected modules representing key stages of visual analysis, with each module learning to exhibit desired local properties through competition. With this system-level model, we evaluate whether a CP effect can be generated with task influence to only the later stages of visual analysis. Our model demonstrates that task learning in just the later stages is sufficient for the model to exhibit the CP effect, demonstrating the existence of a mechanism that requires only a high level of task influence. However, the effect generalizes more widely than is found with human participants, suggesting that changes to earlier stages of analysis may also be involved in the human CP effect, even if these are not fundamental to the development of CP. The model prompts a hybrid account of task-based influences on perception that involves both modifications to the use of the outputs from early perceptual analysis and, possibly, changes to the nature of that early analysis itself.
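
A minimal, much-simplified two-stage analogue of the late-task-influence idea (not the paper's model): an early competitive stage learns prototypes without any labels, and only a later readout stage receives the category signal. The stimuli, category boundary and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimuli are 1-D feature values; the hypothetical category boundary is at 0.5.
x_train = rng.random(400)
y_train = (x_train > 0.5).astype(float)

# Early stage: unsupervised competitive learning of prototypes (no labels used).
prototypes = rng.random(8)
for x in x_train:
    k = np.argmin(np.abs(prototypes - x))            # winner-take-all
    prototypes[k] += 0.1 * (x - prototypes[k])

def early_code(x, width=0.1):
    """Population response of the (task-free) early stage to a stimulus."""
    return np.exp(-((prototypes - x) ** 2) / (2 * width ** 2))

# Later stage: the only part trained with category labels (logistic readout).
w = np.zeros(len(prototypes))
b = 0.0
for _ in range(50):
    for x, y in zip(x_train, y_train):
        r = early_code(x)
        p = 1.0 / (1.0 + np.exp(-(w @ r + b)))
        w += 0.5 * (y - p) * r
        b += 0.5 * (y - p)

for x in (0.2, 0.45, 0.55, 0.8):
    p = 1.0 / (1.0 + np.exp(-(w @ early_code(x) + b)))
    print(f"stimulus {x:.2f} -> P(category B) = {p:.2f}")
```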
