Publication


Featured research published by Yi-Chuan Chen.


Experimental Brain Research | 1995

The functions of the medial premotor cortex

D. Thaler; Yi-Chuan Chen; Philip D. Nixon; Chantal E. Stern; Richard E. Passingham

We report several studies on the effects of removing the medial premotor cortex (supplementary motor area) in monkeys. The removal of this area alone does not cause either paralysis or akinesia. However, the animals were poor at performing a simple learned task in which they had to carry out an arbitrary action: they were taught to raise their arm in order to obtain food in a foodwell below. They were impaired whether they worked in the light or the dark. They were impaired when they had to perform the movements at their own pace, but much less impaired when a tone paced their performance. Monkeys with lesions in the anterior cingulate cortex were as impaired as monkeys with medial premotor lesions at performing this task at their own pace. However, monkeys with lateral premotor lesions were less impaired. We conclude that the medial premotor areas play a crucial role in the performance of learned movements when there is no external stimulus to prompt performance.


Frontiers in Psychology | 2011

Crossmodal constraints on human perceptual awareness: auditory semantic modulation of binocular rivalry.

Yi-Chuan Chen; Su-Ling Yeh; Charles Spence

We report a series of experiments utilizing the binocular rivalry paradigm designed to investigate whether auditory semantic context modulates visual awareness. Binocular rivalry refers to the phenomenon whereby, when two different figures are presented to each eye, observers perceive each figure as being dominant in alternation over time. The results demonstrate that participants report a particular percept as being dominant for less of the time when listening to an auditory soundtrack that is semantically congruent with the alternative (i.e., the competing) percept, as compared to when listening to an auditory soundtrack that is irrelevant to both visual figures (Experiment 1A). When a visually presented word was provided as a semantic cue, no such semantic modulatory effect was observed (Experiment 1B). We also demonstrate that the crossmodal semantic modulation of binocular rivalry was robustly observed irrespective of participants' attentional control over the dichoptic figures and the relative luminance contrast between the figures (Experiments 2A and 2B). The pattern of crossmodal semantic effects reported here cannot simply be attributed to the meaning of the soundtrack guiding participants' attention or biasing their behavioral responses. Hence, these results support the claim that crossmodal perceptual information can serve as a semantic constraint on human visual awareness.


Philosophical Transactions of the Royal Society B | 2014

Multisensory constraints on awareness

Ophelia Deroy; Yi-Chuan Chen; Charles Spence

Given that multiple senses are often stimulated at the same time, perceptual awareness is most likely to take place in multisensory situations. However, theories of awareness are based on studies and models established for a single sense (mostly vision). Here, we consider the methodological and theoretical challenges raised by taking a multisensory perspective on perceptual awareness. First, we consider how well tasks designed to study unisensory awareness perform when used in multisensory settings, stressing that studies using binocular rivalry, bistable figure perception, continuous flash suppression, the attentional blink, repetition blindness and backward masking can demonstrate multisensory influences on unisensory awareness, but fall short of tackling multisensory awareness directly. Studies interested in the latter phenomenon rely on a method of subjective contrast and can, at best, delineate conditions under which individuals report experiencing a multisensory object or two unisensory objects. As there is not a perfect match between these conditions and those in which multisensory integration and binding occur, the link between awareness and binding advocated for visual information processing needs to be revised for multisensory cases. These challenges point to the need to question the very idea of multisensory awareness.


Experimental Brain Research | 2009

Catch the moment: multisensory enhancement of rapid visual events by sound.

Yi-Chuan Chen; Su-Ling Yeh

Repetition blindness (RB) is a visual deficit, wherein observers fail to perceive the second occurrence of a repeated item in a rapid serial visual presentation stream. Chen and Yeh (Psychon Bull Rev 15:404–408, 2008) recently observed a reduction of the RB effect when the repeated items were accompanied by two sounds. The current study further manipulated the pitch of the two sounds (same versus different) in order to examine whether this cross-modal facilitation effect is caused by the multisensory enhancement of the visual event by sound, or multisensory Gestalt (perceptual grouping) of a new representation formed by combining the visual and auditory inputs. The results showed robust facilitatory effects of sound on RB regardless of the pitch of the sounds (Experiment 1), despite an effort to further increase the difference in pitch (Experiment 2). Experiment 3 revealed a close link between participants’ awareness of pitch and the effect of pitch on the RB effect. We conclude that the facilitatory effect of sound on RB results from multisensory enhancement of the perception of visual events by auditory signals.


Psychonomic Bulletin & Review | 2008

Visual events modulated by sound in repetition blindness

Yi-Chuan Chen; Su-Ling Yeh

Repetition blindness (RB; Kanwisher, 1987) is the term used to describe people's failure to detect or report an item that is repeated in a rapid serial visual presentation (RSVP) stream. Although RB is, by definition, a visual deficit, whether it is affected by an auditory signal remains unknown. In the present study, we added two sounds before, simultaneously with, or after the onset of the two critical visual items during RSVP to examine the effect of sound on RB. The results show that the addition of the sounds effectively reduced RB when they appeared at, or around, the critical items. These results indicate that it is easier to perceive an event containing multisensory information than one containing unisensory information alone. Possible mechanisms of how visual and auditory information interact are discussed.


Frontiers in Psychology | 2017

Assessing the Role of the 'Unity Assumption' on Multisensory Integration: A Review.

Yi-Chuan Chen; Charles Spence

There has been longstanding interest from both experimental psychologists and cognitive neuroscientists in the potential modulatory role of various top-down factors on multisensory integration/perception in humans. One such top-down influence, often referred to in the literature as the 'unity assumption,' is thought to occur in those situations in which an observer considers that the various unisensory stimuli they have been presented with belong to one and the same object or event (Welch and Warren, 1980). Here, we review the possible factors that may lead to the emergence of the unity assumption. We then critically evaluate the evidence concerning the consequences of the unity assumption from studies of the spatial and temporal ventriloquism effects, from the McGurk effect, and from the Colavita visual dominance paradigm. The research that has been published to date using these tasks provides support for the claim that the unity assumption influences multisensory perception under at least a subset of experimental conditions. We then consider whether the notion has been superseded in recent years by the introduction of priors in Bayesian causal inference models of human multisensory perception. We suggest that the prior of common cause (that is, the prior concerning whether multisensory signals originate from the same source or not) offers the most useful way to quantify the unity assumption as a continuous cognitive variable, as sketched below.
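
As an illustrative aside (not part of the published abstract): in standard Bayesian causal inference models of multisensory perception (e.g., Körding et al., 2007), the prior of common cause is a single probability, and the "unity" of two signals can be expressed continuously as the posterior probability that a visual estimate and an auditory estimate arose from one source. A minimal sketch, with notation ($x_V$, $x_A$, $p_c$) introduced here purely for illustration:

\[
p(C = 1 \mid x_V, x_A) \;=\; \frac{p(x_V, x_A \mid C = 1)\, p_c}{p(x_V, x_A \mid C = 1)\, p_c + p(x_V, x_A \mid C = 2)\,(1 - p_c)}
\]

Here $C = 1$ denotes a common source, $C = 2$ two independent sources, and $p_c = p(C = 1)$ is the prior of common cause; the posterior on the left is the continuous quantity that the abstract proposes as a formalization of the unity assumption.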


Psychonomic Bulletin & Review | 2017

Hemispheric asymmetry: Looking for a novel signature of the modulation of spatial attention in multisensory processing

Yi-Chuan Chen; Charles Spence

The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants’ attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load—that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.


Multisensory Research | 2013

The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words.

Yi-Chuan Chen; Charles Spence

The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.


Attention Perception & Psychophysics | 2018

Symmetry and its role in the crossmodal correspondence between shape and taste

Nora Turoman; Carlos Velasco; Yi-Chuan Chen; Pi-Chun Huang; Charles Spence

Despite the rapid growth of research on the crossmodal correspondence between visually presented shapes and basic tastes (e.g., sweet, sour, bitter, and salty), most studies that have been published to date have focused on shape contour (roundness/angularity). Meanwhile, other important features, such as symmetry, as well as the underlying mechanisms of the shape–taste correspondence, have rarely been studied. Over two experiments, we systematically manipulated the symmetry and contours of shapes and measured the influences of these variables on shape–taste correspondences. Furthermore, we investigated a potential underlying mechanism, based on the common affective appraisal of stimuli in different sensory modalities. We replicated the results of previous studies showing that round shapes are associated with sweet taste, whereas angular shapes are associated with sour and bitter tastes. In addition, we demonstrated a novel effect that the symmetry group of a shape influences how it is associated with taste. A significant relationship was observed between the taste and appraisal scores of the shapes, suggesting that the affective factors of pleasantness and threat underlie the shape–taste correspondence. These results were consistent across cultures, when we compared participants from Taiwanese and Western (UK, US, Canada) cultures. Our findings highlight that perceived pleasantness and threat are culturally common factors involved in at least some crossmodal correspondences.


Language, Cognition and Neuroscience | 2017

Examining radical position and function in Chinese character recognition using the repetition blindness paradigm

Yi-Chuan Chen; Su-Ling Yeh

Repetition blindness (RB) is the failure to report the second occurrence of repeated items in a rapid serial visual presentation stream. The two-stage model of RB proposed by Bavelier (1994) states that the more properties the repeated items share, the larger the RB effect. We used the RB paradigm to examine the position (left or right) and the function (semantic or phonetic) of radicals in Chinese character recognition. Compared with repeated radicals sharing the same position and function, RB was reduced when the radicals occupied different positions (Experiment 1A), but not when they served different functions (Experiment 1B). A similar RB effect was observed whether only one, or both, of the repeated radicals provided valid semantic or phonetic cues to the characters (Experiments 2A and 2B). These results suggest that radicals are encoded with position information but not function information. Radical function is likely implemented in the lateral connections between the semantic and phonological representations of characters.

Collaboration


Dive into Yi-Chuan Chen's collaboration.

Top Co-Authors

Su-Ling Yeh

National Taiwan University

Jhih-Yun Hsiao

National Taiwan University
