Publication


Featured research published by Erin Goddard.


Journal of Vision | 2010

Combination of subcortical color channels in human visual cortex.

Erin Goddard; Damien J. Mannion; J. S. McDonald; Samuel G. Solomon; Colin W. G. Clifford

Mechanisms of color vision in cortex have not been as well characterized as those in sub-cortical areas, particularly in humans. We used fMRI in conjunction with univariate and multivariate (pattern) analysis to test for the initial transformation of sub-cortical inputs by human visual cortex. Subjects viewed each of two patterns modulating in color between orange-cyan or lime-magenta. We tested for higher order cortical representations of color capable of discriminating these stimuli, which were designed so that they could not be distinguished by the postulated L-M and S-(L + M) sub-cortical opponent channels. We found differences both in the average response and in the pattern of activity evoked by these two types of stimuli, across a range of early visual areas. This result implies that sub-cortical chromatic channels are recombined early in cortical processing to form novel representations of color. Our results also suggest a cortical bias for lime-magenta over orange-cyan stimuli, when they are matched for cone contrast and the response they would elicit in the L-M and S-(L + M) opponent channels.
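
As an illustration of the multivariate (pattern) analysis mentioned in this abstract, the sketch below trains a cross-validated linear classifier to discriminate two stimulus classes from simulated voxel patterns. The data, array sizes, and choice of classifier are assumptions made for illustration only; this is not the authors' analysis pipeline.

```python
# Minimal MVPA sketch on synthetic "voxel" data, assuming a simple two-class design.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials_per_class, n_voxels = 40, 200

# Simulated voxel patterns for two chromatic stimuli:
# class 0 = orange-cyan, class 1 = lime-magenta (small pattern difference plus noise).
signal = rng.normal(0, 0.5, n_voxels)
X0 = rng.normal(0, 1, (n_trials_per_class, n_voxels))
X1 = rng.normal(0, 1, (n_trials_per_class, n_voxels)) + signal
X = np.vstack([X0, X1])
y = np.array([0] * n_trials_per_class + [1] * n_trials_per_class)

# Cross-validated linear classifier: accuracy above chance (0.5) implies the
# voxel pattern carries information that distinguishes the two stimulus classes.
clf = LinearSVC(max_iter=10000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```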


Journal of Vision | 2011

Color responsiveness argues against a dorsal component of human V4

Erin Goddard; Damien J. Mannion; J. S. McDonald; Samuel G. Solomon; Colin W. G. Clifford

The retinotopic organization, position, and functional responsiveness of some early visual cortical areas in human and non-human primates are consistent with their being homologous structures. The organization of other areas remains controversial. A critical debate concerns the potential human homologue of macaque area V4, an area very responsive to colored images: specifically, whether human V4 is divided between ventral and dorsal components, as in the macaque, or whether human V4 is confined to one ventral area. We used fMRI to define these areas retinotopically in human and to test the impact of image color on their responsivity. We found a robust preference for full-color movie segments over a luminance-matched achromatic version in ventral V4 but little or no preference in the vicinity of the putative dorsal counterpart. Contrary to previous reports that visual field coverage in the ventral part of V4 is deficient without the dorsal part, we found that coverage in ventral V4 extended to the lower vertical meridian, including the entire contralateral hemifield. Together these results provide evidence against a dorsal component of human V4. Instead, they are consistent with human V4 being a single, ventral region that is sensitive to the chromatic components of images.


Vision Research | 2008

Centre-surround effects on perceived orientation in complex images.

Erin Goddard; Colin W. G. Clifford; Samuel G. Solomon

Using the simultaneous tilt illusion [Gibson, J., & Radner, M. (1937). Adaptation, after-effect and contrast in the perception of tilted lines. Journal of Experimental Psychology, 12, 453-467], we investigate the perception of orientation in natural images and textures with similar statistical properties. We show that the illusion increases if observers judge the average orientation of textures rather than sinusoidal gratings. Furthermore, the illusion can be induced by surrounding textures with a broad range of orientations, even those without a clearly perceivable orientation. A robust illusion is induced by natural images, and is increased by randomising the phase spectra of those images. We present a simple model of orientation processing that can accommodate most of our observations.


NeuroImage | 2016

Representational dynamics of object recognition: Feedforward and feedback information flows

Erin Goddard; Thomas A. Carlson; Nadene Dermody; Alexandra Woolgar

Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas and feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80 ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265 ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and a later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception.
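
To make the time-resolved decoding described above concrete, the sketch below trains a classifier at every time point of simulated MEG data and reports when object identity first becomes decodable. The data are synthetic, and the array sizes, onset latency, and accuracy threshold are illustrative assumptions rather than the study's code or results.

```python
# Time-resolved decoding sketch on synthetic sensor data, assuming a two-class design.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_trials, n_sensors = 100, 50
times_ms = np.arange(-100, 500, 5)            # epoch from -100 to 495 ms in 5 ms steps
n_times = times_ms.size
y = np.repeat([0, 1], n_trials // 2)          # two object identities

# Synthetic sensor data: an identity-related pattern appears after ~80 ms (assumed).
X = rng.normal(0, 1, (n_trials, n_sensors, n_times))
signal_pattern = rng.normal(0, 1, n_sensors)
onset = np.searchsorted(times_ms, 80)
X[:, :, onset:] += 0.4 * np.outer(2 * y - 1, signal_pattern)[:, :, None]

# Decode identity separately at every time point (5-fold cross-validation).
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

above = np.where(accuracy > 0.6)[0]
if above.size:
    print(f"Decoding first exceeds 60% accuracy at ~{times_ms[above[0]]} ms")
```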


Journal of Vision | 2010

Orientation-selective chromatic mechanisms in human visual cortex.

Damien J. Mannion; Erin Goddard; Colin W. G. Clifford

We used functional magnetic resonance imaging (fMRI) at 3T in human participants to trace the chromatic selectivity of orientation processing through functionally defined regions of visual cortex. Our aim was to identify mechanisms that respond to chromatically defined orientation and to establish whether they are tuned specifically to color or operate in an essentially cue-invariant manner. Using an annular test region surrounded inside and out by an inducing stimulus, we found evidence of sensitivity to orientation defined by red-green (L-M) or blue-yellow (S-cone isolating) chromatic modulations across retinotopic visual cortex and of joint selectivity for color and orientation. The likely mechanisms underlying this selectivity are discussed in terms of orientation-specific lateral interactions and spatial summation within the receptive field.


NeuroImage | 2017

Ghosts in machine learning for cognitive neuroscience: Moving from data to theory

Thomas A. Carlson; Erin Goddard; David M. Kaplan; Colin Klein; J. Brendan Ritchie

The application of machine learning methods to neuroimaging data has fundamentally altered the field of cognitive neuroscience. Future progress in understanding brain function using these methods will require addressing a number of key methodological and interpretive challenges. Because these challenges often remain unseen and metaphorically “haunt” our efforts to use these methods to understand the brain, we refer to them as “ghosts”. In this paper, we describe three such ghosts, situate them within a more general framework from philosophy of science, and then describe steps to address them. The first ghost arises from difficulties in determining what information machine learning classifiers use for decoding. The second ghost arises from the interplay of experimental design and the structure of information in the brain – that is, our methods embody implicit assumptions about information processing in the brain, and it is often difficult to determine if those assumptions are satisfied. The third ghost emerges from our limited ability to distinguish information that is merely decodable from the brain from information that is represented and used by the brain. Each of the three ghosts places limits on the interpretability of decoding research in cognitive neuroscience. There are no easy solutions, but facing these issues squarely will provide a clearer path to understanding the nature of representation and computation in the human brain.

Highlights:
Provides a philosophical framework for thinking about applications of machine learning to cognitive neuroscience datasets.
Discusses current challenges for neural decoding research.
Gives suggestions about how to address contemporary challenges in decoding research.


NeuroImage | 2017

Interpreting the dimensions of neural feature representations revealed by dimensionality reduction

Erin Goddard; Colin Klein; Samuel G. Solomon; Hinze Hogendoorn; Thomas A. Carlson

Recent progress in understanding the structure of neural representations in the cerebral cortex has centred around the application of multivariate classification analyses to measurements of brain activity. These analyses have proved a sensitive test of whether given brain regions provide information about specific perceptual or cognitive processes. An exciting extension of this approach is to infer the structure of this information, thereby drawing conclusions about the underlying neural representational space. These approaches rely on exploratory data-driven dimensionality reduction to extract the natural dimensions of neural spaces, including natural visual object and scene representations, semantic and conceptual knowledge, and working memory. However, the efficacy of these exploratory methods is unknown, because they have only been applied to representations in brain areas for which we have little or no secondary knowledge. One of the best-understood areas of the cerebral cortex is area MT of primate visual cortex, which is known to be important in motion analysis. To assess the effectiveness of dimensionality reduction for recovering neural representational space, we applied several dimensionality reduction methods to multielectrode measurements of spiking activity obtained from area MT of marmoset monkeys, made while systematically varying the motion direction and speed of moving stimuli. Despite robust tuning at individual electrodes, and high classifier performance, dimensionality reduction rarely revealed dimensions for direction and speed. We use this example to illustrate important limitations of these analyses, and suggest a framework for how to best apply such methods to data where the structure of the neural representation is unknown.

Highlights:
Dimensionality reduction (DR) is used to infer neural representational structure.
DR is often not hypothesis-neutral or purely data-driven, as claimed.
DR results are difficult to interpret even for a well-understood visual area.
We present a framework for using DR as an exploratory analysis tool.
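
The sketch below illustrates the kind of exploratory dimensionality reduction discussed in this abstract: PCA applied to a matrix of simulated, direction-tuned firing rates. The tuning model and all sizes and parameters are assumptions chosen for illustration; they are not the marmoset recordings or the analysis code used in the paper.

```python
# PCA sketch on simulated direction-tuned responses, assuming idealised von Mises tuning.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

n_units, n_directions = 60, 16
directions = np.linspace(0, 2 * np.pi, n_directions, endpoint=False)

# Each simulated unit has a preferred direction and von Mises tuning.
preferred = rng.uniform(0, 2 * np.pi, n_units)
kappa = 2.0
rates = np.exp(kappa * np.cos(directions[None, :] - preferred[:, None]))
rates += rng.normal(0, 0.2, rates.shape)       # measurement noise

# PCA over stimulus conditions: rows are conditions, columns are units.
pca = PCA(n_components=3)
scores = pca.fit_transform(rates.T)            # shape (n_directions, 3)

print("Variance explained:", np.round(pca.explained_variance_ratio_, 2))
# With idealised tuning like this, the first two components trace out a roughly
# circular path as direction varies; the paper's caution is that such structure
# is often much harder to recover from real recordings.
print(np.round(scores[:, :2], 2))
```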


Journal of Vision | 2013

A new type of change blindness: Smooth, isoluminant color changes are monitored on a coarse spatial scale

Erin Goddard; Colin W. G. Clifford

Attending selectively to changes in our visual environment may help filter less important, unchanging information within a scene. Here, we demonstrate that color changes can go unnoticed even when they occur throughout an otherwise static image. The novelty of this demonstration is that it does not rely upon masking by a visual disruption or stimulus motion, nor does it require the change to be very gradual and restricted to a small section of the image. Using a two-interval, forced-choice change-detection task and an odd-one-out localization task, we showed that subjects were slowest to respond and least accurate (implying that change was hardest to detect) when the color changes were isoluminant, smoothly varying, and asynchronous with one another. This profound change blindness offers new constraints for theories of visual change detection, implying that, in the absence of transient signals, changes in color are typically monitored at a coarse spatial scale.


Journal of Neurophysiology | 2017

Dynamic population codes of multiplexed stimulus features in primate area MT

Erin Goddard; Samuel G. Solomon; Thomas A. Carlson

The middle-temporal area (MT) of primate visual cortex is critical in the analysis of visual motion. Single-unit studies suggest that the response dynamics of neurons within area MT depend on stimulus features, but how these dynamics emerge at the population level, and how feature representations interact, is not clear. Here, we used multivariate classification analysis to study how stimulus features are represented in the spiking activity of populations of neurons in area MT of marmoset monkey. Using representational similarity analysis we distinguished the emerging representations of moving grating and dot field stimuli. We show that representations of stimulus orientation, spatial frequency, and speed are evident near the onset of the population response, while the representation of stimulus direction is slower to emerge and sustained throughout the stimulus-evoked response. We further found a spatiotemporal asymmetry in the emergence of direction representations. Representations for high spatial frequencies and low temporal frequencies are initially orientation dependent, while those for high temporal frequencies and low spatial frequencies are more sensitive to motion direction. Our analyses reveal a complex interplay of feature representations in area MT population response that may explain the stimulus-dependent dynamics of motion vision.

New & Noteworthy: Simultaneous multielectrode recordings can measure population-level codes that previously were only inferred from single-electrode recordings. However, many multielectrode recordings are analyzed using univariate single-electrode analysis approaches, which fail to fully utilize the population-level information. Here, we overcome these limitations by applying multivariate pattern classification analysis and representational similarity analysis to large-scale recordings from middle-temporal area (MT) in marmoset monkeys. Our analyses reveal a dynamic interplay of feature representations in area MT population response.
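
As a concrete illustration of representational similarity analysis as used above, the sketch below builds a neural dissimilarity matrix from simulated direction-tuned population responses and compares it with a model dissimilarity matrix based on angular differences between motion directions. The responses and all sizes are simulated assumptions, not the paper's data or analysis.

```python
# RSA sketch on simulated population responses, assuming direction-tuned units.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

n_units, n_directions = 80, 12
directions = np.linspace(0, 2 * np.pi, n_directions, endpoint=False)

# Simulated population response for each motion direction.
preferred = rng.uniform(0, 2 * np.pi, n_units)
responses = np.exp(1.5 * np.cos(directions[:, None] - preferred[None, :]))
responses += rng.normal(0, 0.3, responses.shape)

# Neural RDM: pairwise correlation distance between condition response patterns.
neural_rdm = pdist(responses, metric="correlation")

# Model RDM: angular difference between motion directions.
angle_diff = np.abs(directions[:, None] - directions[None, :])
angle_diff = np.minimum(angle_diff, 2 * np.pi - angle_diff)
model_rdm = squareform(angle_diff, checks=False)

# Rank correlation between the neural and model RDMs (condensed upper triangles).
rho, _ = spearmanr(neural_rdm, model_rdm)
print(f"Neural vs direction-model RDM correlation: rho = {rho:.2f}")
```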


Journal of Neurophysiology | 2017

A step toward understanding the human ventral visual pathway

Erin Goddard

The human ventral visual pathway is implicated in higher order form processing, but the organizational principles within this region are not yet well understood. Recently, Lafer-Sousa, Conway, and Kanwisher (J Neurosci 36: 1682-1697, 2016) used functional magnetic resonance imaging to demonstrate that functional responses in the human ventral visual pathway share a broad homology with those in macaque inferior temporal cortex, providing new evidence supporting the validity of the macaque as a model of the human visual system in this region. In addition, these results give new clues for understanding the organizational principles within the ventral visual pathway and the processing of higher order color and form, suggesting new avenues for research into this cortical region.

Collaboration


Dive into Erin Goddard's collaborations.

Top Co-Authors

Colin W. G. Clifford

University of New South Wales
