Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rocco Chiou is active.

Publication


Featured research published by Rocco Chiou.


Perception | 2012

Cross-modality correspondence between pitch and spatial location modulates attentional orienting

Rocco Chiou; Anina N. Rich

The brain constantly integrates incoming signals across the senses to form a cohesive view of the world. Most studies on multisensory integration concern the roles of spatial and temporal parameters. However, recent findings suggest that cross-modal correspondences (e.g., high-pitched sounds associated with bright, small objects located high up) also affect multisensory integration. Here, we focus on the association between auditory pitch and spatial location. Surprisingly little is known about the cognitive and perceptual roots of this phenomenon, despite its long use in ergonomic design. In a series of experiments, we explore how this cross-modal mapping affects the allocation of attention with an attentional cuing paradigm. Our results demonstrate that high and low tones induce attention shifts to upper and lower locations, respectively. Furthermore, this pitch-induced cuing effect is susceptible to contextual manipulations and volitional control. These findings suggest the cross-modal interaction between pitch and location originates from an attentional level rather than from response mapping alone. The flexible contextual mapping between pitch and location, as well as its susceptibility to top-down control, suggests the pitch-induced cuing effect is primarily mediated by cognitive processes after initial sensory encoding and occurs at a relatively late stage of voluntary attention orienting.


Frontiers in Psychology | 2014

The role of conceptual knowledge in understanding synaesthesia: Evaluating contemporary findings from a “hub-and-spokes” perspective

Rocco Chiou; Anina N. Rich

Synesthesia is a phenomenon in which stimulation in one sensory modality triggers involuntary experiences typically not associated with that stimulation. Inducing stimuli (inducers) and synesthetic experiences (concurrents) may occur within the same modality (e.g., seeing colors while reading achromatic text) or span across different modalities (e.g., tasting flavors while listening to music). Although there has been considerable progress over the last decade in understanding the cognitive and neural mechanisms of synesthesia, the focus of current neurocognitive models of synesthesia does not encompass many crucial psychophysical characteristics documented in behavioral research. Prominent theories of the neurophysiological basis of synesthesia construe it as a perceptual phenomenon and hence focus primarily on the modality-specific brain regions for perception. Many behavioral studies, however, suggest an essential role for conceptual-level information in synesthesia. For example, there is evidence that synesthetic experience arises subsequent to identification of an inducing stimulus, differs substantially from real perceptual events, can be akin to perceptual memory, and is susceptible to lexical/semantic contexts. These data suggest that neural mechanisms lying beyond the realm of the perceptual cortex (especially the visual system), such as regions subserving conceptual knowledge, may play pivotal roles in the neural architecture of synesthesia. Here we discuss the significance of non-perceptual mechanisms that call for a re-evaluation of the emphasis on synesthesia as a perceptual phenomenon. We also review recent studies which hint that some aspects of synesthesia resemble our general conceptual knowledge for object attributes, at both psychophysical and neural levels. We then present a conceptual-mediation model of synesthesia in which the inducer and concurrent are linked within a conceptual-level representation. 
This “inducer-to-concurrent” nexus is maintained within a supramodal “hub,” while the subjective (bodily) experience of its resultant concurrent (e.g., a color) may then require activation of “spokes” in the perception-related cortices. This hypothesized “hub-and-spoke” structure would engage a distributed network of cortical regions and may account for the full breadth of this intriguing phenomenon.


Cortex | 2016

The anterior temporal cortex is a primary semantic source of top-down influences on object recognition

Rocco Chiou; Matthew A. Lambon Ralph

Perception emerges from a dynamic interplay between feed-forward sensory input and feedback modulation along the cascade of neural processing. Prior knowledge, a major form of top-down modulatory signal, benefits perception by enabling efficacious inference and resolving ambiguity, particularly under circumstances of degraded visual input. Despite semantic information being a potentially critical source of this top-down influence, to date, the core neural substrate of semantic knowledge (the anterolateral temporal lobe – ATL) has not been considered as a key component of the feedback system. Here we provide direct evidence of its significance for visual cognition – the ATL underpins the semantic aspect of object recognition, amalgamating sensory-based (amount of accumulated sensory input) and semantic-based (representational proximity between exemplars and typicality of appearance) influences. Using transcranial theta-burst stimulation combined with a novel visual identification paradigm, we demonstrate that the left ATL contributes to discrimination between visual objects. Crucially, its contribution is especially vital under situations where semantic knowledge is most needed for supplementing deficiency of input (brief visual exposure), discerning analogously-coded exemplars (close representational distance), and resolving discordance (target appearance violating the statistical typicality of its category). Our findings characterise functional properties of the ATL in object recognition: this neural structure is summoned to augment the visual system when the latter is overtaxed by challenging conditions (insufficient input, overlapped neural coding, and conflict between incoming signal and expected configuration). This suggests a need to revisit current theories of object recognition, incorporating the ATL that interfaces high-level vision with semantic knowledge.


Journal of Cognitive Neuroscience | 2014

A conceptual lemon: Theta burst stimulation to the left anterior temporal lobe untangles object representation and its canonical color

Rocco Chiou; Paul F. Sowman; Andrew C. Etchell; Anina N. Rich

Object recognition benefits greatly from our knowledge of typical color (e.g., a lemon is usually yellow). Most research on object color knowledge focuses on whether both knowledge and perception of object color recruit the well-established neural substrates of color vision (the V4 complex). Compared with the intensive investigation of the V4 complex, we know little about where and how neural mechanisms beyond V4 contribute to color knowledge. The anterior temporal lobe (ATL) is thought to act as a “hub” that supports semantic memory by integrating different modality-specific contents into a meaningful entity at a supramodal conceptual level, making it a good candidate zone for mediating the mappings between object attributes. Here, we explore whether the ATL is critical for integrating typical color with other object attributes (object shape and name), akin to its role in combining nonperceptual semantic representations. In separate experimental sessions, we applied TMS to disrupt neural processing in the left ATL and a control site (the occipital pole). Participants performed an object naming task that probes color knowledge and elicits a reliable color congruency effect as well as a control quantity naming task that also elicits a cognitive congruency effect but involves no conceptual integration. Critically, ATL stimulation eliminated the otherwise robust color congruency effect but had no impact on the numerical congruency effect, indicating a selective disruption of object color knowledge. Neither color nor numerical congruency effects were affected by stimulation at the control occipital site, ruling out nonspecific effects of cortical stimulation. Our findings suggest that the ATL is involved in the representation of object concepts that include their canonical colors.


eLife | 2016

Sensory dynamics of visual hallucinations in the normal population

Joel Pearson; Rocco Chiou; Sebastian Rogers; Marcus Wicken; Stewart Heitmann; Bard Ermentrout

Hallucinations occur in both normal and clinical populations. Due to their unpredictability and complexity, the mechanisms underlying hallucinations remain largely untested. Here we show that visual hallucinations can be induced in the normal population by visual flicker, limited to an annulus that constricts content complexity to simple moving grey blobs, allowing objective mechanistic investigation. Hallucination strength peaked at ~11 Hz flicker and was dependent on cortical processing. Hallucinated motion speed increased with flicker rate and, when mapped onto visual cortex, was independent of eccentricity; it underwent local sensory adaptation and showed the same bistable and mnemonic dynamics as sensory perception. A neural field model with motion selectivity provides a mechanism for both hallucinations and perception. Our results demonstrate that hallucinations can be studied objectively, and they share multiple mechanisms with sensory perception. We anticipate that this assay will be critical to test theories of human consciousness and clinical models of hallucination.
DOI: http://dx.doi.org/10.7554/eLife.17072.001
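For background, neural field models of the kind mentioned in this abstract are typically variants of the classic Amari field equation; the following is a generic sketch of that family, not the specific motion-selective model developed in the paper:

```latex
\tau \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{\Omega} w(x-y)\, f\bigl(u(y,t)\bigr)\, dy + I(x,t)
```

Here \(u(x,t)\) is the local cortical activity, \(w\) a lateral-connectivity kernel (typically centre-surround), \(f\) a sigmoidal firing-rate function, and \(I(x,t)\) the external drive (e.g., the flicker stimulus). Pattern-forming instabilities of such equations are a standard account of how spatially structured hallucinations can arise from spatially uniform input.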


Perception | 2015

Volitional Mechanisms Mediate the Cuing Effect of Pitch on Attention Orienting: The Influences of Perceptual Difficulty and Response Pressure

Rocco Chiou; Anina N. Rich

Our cognitive system tends to link auditory pitch with spatial location in a specific manner (i.e., high-pitched sounds are usually associated with an upper location, and low sounds are associated with a lower location). Recent studies have demonstrated that this cross-modality association biases the allocation of visual attention and affects performance despite the auditory stimuli being irrelevant to the behavioural task. There is, however, a discrepancy between studies in their interpretation of the underlying mechanisms. Whereas we have previously claimed that the pitch-location mapping is mediated by volitional shifts of attention (Chiou & Rich, 2012, Perception, 41, 339–353), other researchers suggest that this cross-modal effect reflects automatic shifts of attention (Mossbridge, Grabowecky, & Suzuki, 2011, Cognition, 121, 133–139). Here we report a series of three experiments examining the effects of perceptual and response-related pressure on the ability of nonpredictive pitch to bias visual attention. We compare it with two control cues: a predictive pitch that triggers voluntary attention shifts and a salient peripheral flash that evokes involuntary shifts. The results show that the effect of nonpredictive pitch is abolished by pressure at either perceptual or response levels. By contrast, the effects of the two control cues remain significant, demonstrating the robustness of informative and perceptually salient stimuli in directing attention. This distinction suggests that, in contexts of high perceptual demand and response pressure, cognitive resources are primarily engaged by the task-relevant stimuli, which effectively prevents uninformative pitch from orienting attention to its cross-modally associated location. These findings are consistent with the hypothesis that the link between pitch and location affects attentional deployment via volitional rather than automatic mechanisms.


NeuroImage | 2018

The anterior-ventrolateral temporal lobe contributes to boosting visual working memory capacity for items carrying semantic information

Rocco Chiou; Matthew A. Lambon Ralph

Working memory (WM) is a buffer that temporarily maintains information, be it visual or auditory, in an active state, caching its contents for online rehearsal or manipulation. How the brain enables long-term semantic knowledge to affect the WM buffer is a theoretically significant issue awaiting further investigation. In the present study, we capitalise on the knowledge about famous individuals as a ‘test-case’ to study how it impinges upon WM capacity for human faces and its neural substrate. Using continuous theta-burst transcranial stimulation combined with a psychophysical task probing WM storage for varying contents, we provide compelling evidence that (1) faces (regardless of familiarity) continued to accrue in the WM buffer with longer encoding time, whereas for meaningless stimuli (colour shades) there was little increment; (2) the rate of WM accrual was significantly more efficient for famous faces, compared to unknown faces; (3) the right anterior-ventrolateral temporal lobe (ATL) causally mediated this superior WM storage for famous faces. Specifically, disrupting the ATL (a region tuned to semantic knowledge including person identity) selectively hinders WM accrual for celebrity faces while leaving the accrual for unfamiliar faces intact. Further, this ‘semantically-accelerated’ storage is impervious to disruption of the right middle frontal gyrus and vertex, supporting the specific and causative contribution of the right ATL. Our finding advances the understanding of the neural architecture of WM, demonstrating that it depends on interaction with long-term semantic knowledge underpinned by the ATL, which causally expands the WM buffer when visual content carries semantic information.


Cortex | 2018

Controlled semantic cognition relies upon dynamic and flexible interactions between the executive ‘semantic control’ and hub-and-spoke ‘semantic representation’ systems

Rocco Chiou; Gina F. Humphreys; JeYoung Jung; Matthew A. Lambon Ralph

Built upon a wealth of neuroimaging, neurostimulation, and neuropsychology data, a recent proposal set forth a framework termed controlled semantic cognition (CSC) to account for how the brain underpins the ability to flexibly use semantic knowledge (Lambon Ralph et al., 2017; Nature Reviews Neuroscience). In CSC, the ‘semantic control’ system, underpinned predominantly by the prefrontal cortex, dynamically monitors and modulates the ‘semantic representation’ system that consists of a ‘hub’ (anterior temporal lobe, ATL) and multiple ‘spokes’ (modality-specific areas). CSC predicts that unfamiliar and exacting semantic tasks should intensify communication between the ‘control’ and ‘representation’ systems, relative to familiar and less taxing tasks. In the present study, we used functional magnetic resonance imaging (fMRI) to test this hypothesis. Participants paired unrelated concepts by canonical colours (a less accustomed task – e.g., pairing ketchup with fire-extinguishers due to both being red) or paired well-related concepts by semantic relationship (a typical task – e.g., ketchup is related to mustard). We found the ‘control’ system was more engaged by atypical than typical pairing. While both tasks activated the ATL ‘hub’, colour pairing additionally involved occipitotemporal ‘spoke’ regions abutting areas of hue perception. Furthermore, we uncovered a gradient along the ventral temporal cortex, transitioning from the caudal ‘spoke’ zones preferring canonical colour processing to the rostral ‘hub’ zones preferring semantic relationship. Functional connectivity also differed between the tasks: Compared with semantic pairing, colour pairing relied more upon the inferior frontal gyrus, a key node of the control system, driving enhanced connectivity with occipitotemporal ‘spoke’. 
Together, our findings characterise the interaction within the neural architecture of semantic cognition – the control system dynamically heightens its connectivity with relevant components of the representation system, in response to different semantic contents and difficulty levels.


Cognition | 2018

Exploring the functional nature of synaesthetic colour: Dissociations from colour perception and imagery

Rocco Chiou; Anina N. Rich; Sebastian Rogers; Joel Pearson

Individuals with grapheme-colour synaesthesia experience anomalous colours when reading achromatic text. These unusual experiences have been said to resemble ‘normal’ colour perception or colour imagery, but studying the nature of synaesthesia remains difficult. In the present study, we report novel evidence that synaesthetic colour impacts conscious vision in a way that is different from both colour perception and imagery. Presenting ‘normal’ colour prior to binocular rivalry induces a location-dependent suppressive bias reflecting local habituation. By contrast, a grapheme that evokes synaesthetic colour induces a facilitatory bias reflecting priming that is not constrained to the inducing grapheme’s location. This priming does not occur in non-synaesthetes and does not result from response bias. It is sensitive to diversion of visual attention away from the grapheme, but resistant to sensory perturbation, reflecting a reliance on cognitive rather than sensory mechanisms. Whereas colour imagery in non-synaesthetes causes local priming that relies on the locus of imagined colour, imagery in synaesthetes caused global priming not dependent on the locus of imagery. These data suggest a unique psychophysical profile of high-level colour processing in synaesthetes. Our novel findings and method will be critical to testing theories of synaesthesia and visual awareness.


Clinical EEG and Neuroscience | 2012

Beyond colour perception: auditory synaesthesia elicits visual experience of colour, shape, and spatial location

Rocco Chiou; Marleen Stelter; Anina N. Rich


Collaboration


Dive into Rocco Chiou's collaboration.

Top Co-Authors

Joel Pearson
University of New South Wales

Sebastian Rogers
University of New South Wales

Denise H. Wu
National Central University