Jason S. Chan
Trinity College, Dublin
Publications
Featured research published by Jason S. Chan.
Neuroscience Letters | 2005
Daniel Sanabria; Salvador Soto-Faraco; Jason S. Chan; Charles Spence
We investigated the extent to which intramodal visual perceptual grouping influences the multisensory integration (or grouping) of auditory and visual motion information. Participants discriminated the direction of motion of two sequentially presented sounds (moving leftward or rightward), while simultaneously trying to ignore a task-irrelevant visual apparent motion stream. The principles of perceptual grouping were used to vary the direction and extent of apparent motion within the irrelevant modality (vision). The results demonstrate that the multisensory integration of motion information can be modulated by the perceptual grouping taking place unimodally within vision, suggesting that unimodal perceptual grouping processes precede multisensory integration. The present study therefore illustrates how intramodal and crossmodal perceptual grouping processes interact to determine how the information in complex multisensory environments is parsed.
PLOS ONE | 2014
Carmel A. Levitan; Jiana Ren; Andy T. Woods; Sanne Boesveldt; Jason S. Chan; Kirsten J. McKenzie; Michael V. Dodson; Jai Levin; Christine Xiang Ru Leong; Jasper J. F. van den Bosch
Colors and odors are associated; for instance, people typically match the smell of strawberries to the color pink or red. These associations are forms of crossmodal correspondences. Recently, there has been discussion about the extent to which these correspondences arise for structural reasons (i.e., an inherent mapping between color and odor), statistical reasons (i.e., covariance in experience), and/or semantically-mediated reasons (i.e., stemming from language). The present study probed this question by testing color-odor correspondences in 6 different cultural groups (Dutch, Netherlands-residing-Chinese, German, Malay, Malaysian-Chinese, and US residents), using the same set of 14 odors and asking participants to make congruent and incongruent color choices for each odor. We found consistent patterns in color choices for each odor within each culture, showing that participants were making non-random color-odor matches. We used representational dissimilarity analysis to probe for variations in the patterns of color-odor associations across cultures; we found that US and German participants had the most similar patterns of associations, followed by German and Malay participants. The largest group differences were between Malay and Netherlands-resident Chinese participants and between Dutch and Malaysian-Chinese participants. We conclude that culture plays a role in color-odor crossmodal associations, which likely arise, at least in part, through experience.
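As an illustrative aside, the representational dissimilarity analysis mentioned above can be sketched roughly as follows, assuming hypothetical per-culture colour-choice proportion matrices; the variable names, the random data, and the 11-colour grid are invented for illustration and are not the study's actual pipeline.

    # Illustrative sketch only: build odor-by-odor dissimilarity matrices from
    # hypothetical per-culture colour-choice proportions and compare two cultures.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Hypothetical data: for each culture, a 14 (odors) x 11 (colour categories)
    # matrix of colour-choice proportions (each row sums to 1).
    def fake_choice_matrix(n_odors=14, n_colours=11):
        counts = rng.random((n_odors, n_colours))
        return counts / counts.sum(axis=1, keepdims=True)

    culture_a = fake_choice_matrix()
    culture_b = fake_choice_matrix()

    def rdm(choice_matrix):
        # Pairwise correlation distance between the odors' colour-choice profiles.
        return squareform(pdist(choice_matrix, metric="correlation"))

    rdm_a, rdm_b = rdm(culture_a), rdm(culture_b)

    # Compare the two cultures' association patterns via the upper triangles.
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, p = spearmanr(rdm_a[iu], rdm_b[iu])
    print(f"cross-culture RDM similarity: rho = {rho:.2f}, p = {p:.3f}")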
Attention, Perception, & Psychophysics | 2008
Jason S. Chan; Fiona N. Newell
Task-dependent information processing for the purpose of recognition or spatial perception is considered a principle common to all the main sensory modalities. Using a dual-task interference paradigm, we investigated the behavioral effects of independent information processing for shape identification and localization of object features within and across vision and touch. In Experiment 1, we established that color and texture processing (i.e., a “what” task) interfered with both visual and haptic shape-matching tasks and that mirror image and rotation matching (i.e., a “where” task) interfered with a feature-location-matching task in both modalities. In contrast, interference was reduced when a “where” interference task was embedded in a “what” primary task and vice versa. In Experiment 2, we replicated this finding within each modality, using the same interference and primary tasks throughout. In Experiment 3, the interference tasks were always conducted in a modality other than the primary task modality. Here, we found that resources for identification and spatial localization are independent of modality. Our findings further suggest that multisensory resources for shape recognition also involve resources for spatial localization. These results extend recent neuropsychological and neuroimaging findings and have important implications for our understanding of high-level information processing across the human sensory systems.
Cognitive, Affective, & Behavioral Neuroscience | 2004
Daniel Sanabria; Salvador Soto-Faraco; Jason S. Chan; Charles Spence
Several studies have shown that the direction in which a visual apparent motion stream moves can influence the perceived direction of an auditory apparent motion stream (an effect known as crossmodal dynamic capture). However, little is known about the role that intramodal perceptual grouping processes play in the multisensory integration of motion information. The present study was designed to investigate the time course of any modulation of the cross-modal dynamic capture effect by the nature of the perceptual grouping taking place within vision. Participants were required to judge the direction of an auditory apparent motion stream while trying to ignore visual apparent motion streams presented in a variety of different configurations. Our results demonstrate that the cross-modal dynamic capture effect was influenced more by visual perceptual grouping when the conditions for intramodal perceptual grouping were set up prior to the presentation of the audiovisual apparent motion stimuli. However, no such modulation occurred when the visual perceptual grouping manipulation was established at the same time as or after the presentation of the audiovisual stimuli. These results highlight the importance of the unimodal perceptual organization of sensory information to the manifestation of multisensory integration.
Current Alzheimer Research | 2015
Jason S. Chan; Jochen Kaiser; Mareike Brandl; Silke Matura; David Prvulovic; Michael Hogan; Marcus J. Naumer
Previous studies investigating mild cognitive impairment (MCI) have focused primarily on deficits in cognition, memory, attention, and executive function. There has been relatively little research on the perceptual deficits people with MCI may exhibit. This is surprising given that sensory and cognitive functions have been suggested to share a common cortical framework [1]. In the present study, we presented the sound-induced flash illusion (SiFi) to a group of participants with MCI and to healthy controls (HC). The SiFi is an audio-visual illusion in which two beeps and one flash are presented; participants tend to perceive two flashes when the time interval between the auditory beeps is short [2, 3]. Participants with MCI perceived significantly more illusions than HC over longer auditory time intervals, suggesting that they integrate more (arguably irrelevant) audiovisual information than HC. By incorporating perceptual tasks into clinical assessment it may be possible to gain a more comprehensive understanding of the disease, as well as to provide a more accurate diagnosis to those who may have a language impairment.
Perception | 2012
Jason S. Chan; Corrina Maguinness; Danuta Lisiecka; Annalisa Setti; Fiona N. Newell
Auditory stimuli are known to improve visual target recognition and detection when both are presented in the same spatial location. However, most studies have focused on crossmodal spatial congruency along the horizontal plane, and the effects of audio-visual spatial congruency in depth (i.e., along the depth axis) are relatively less well understood. In the following experiments we presented a visual (face) or auditory (voice) target stimulus in a location on a spatial array which was either spatially congruent or incongruent in depth (i.e., positioned directly in front of or behind) with a crossmodal stimulus. The participants' task was to determine whether a visual (Experiments 1 and 3) or auditory (Experiment 2) target was located in the foreground or background of this array. We found that both visual and auditory targets were located less accurately when crossmodal stimuli were presented from incongruent, compared to congruent, locations in depth. Moreover, this effect was most pronounced for visual targets located in the periphery, whereas spatial incongruency affected the localisation of auditory targets at both locations. The relative distance of the array from the observer did not modulate this congruency effect (Experiment 3). Our results add to the growing evidence for multisensory influences on search performance and extend these findings to the localisation of targets in the depth plane.
Behavior Research Methods | 2007
Jason S. Chan; Thorsten Maucher; Johannes Schemmel; Dana Kilroy; Fiona N. Newell; K. Meier
In order to better understand the processes involved in the perception of shape through touch, some element of control is required over the nature of the shape presented to the hand and the timing of its presentation. To that end, we have developed a cost-effective, computer-controlled apparatus for presenting haptic stimuli using active touch, known as a virtual haptic display (VHD). The operational principle behind this device is that it translates black-and-white visual images into topographic, 2-D taxel (tactile pixel) arrays, following the same principle used in Braille. At any one time each taxel is either elevated or depressed, representing the white and black pixels of the visual image, respectively. To feel the taxels, the participant places their fingers on a carriage which can be moved over the surface of the device to reveal a virtual shape. We conducted two experiments, and the results show that untrained participants are able to recognize different shapes, both simple and complex, using this apparatus. The VHD apparatus is therefore well suited to presenting 2-D shapes through touch alone. Moreover, this device and its supporting software can also be used for presenting computer-controlled stimuli in cross-modal experiments.
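A minimal sketch of the operational principle described above, assuming a hypothetical taxel grid size and threshold (this is not the device's actual control software): a grayscale image is reduced to a binary 2-D taxel array in which white regions map to raised taxels and black regions to lowered ones.

    # Illustrative sketch: map a grayscale image onto a binary taxel (tactile
    # pixel) grid, raising taxels for white regions and lowering them for black.
    import numpy as np

    def image_to_taxels(image, grid_shape=(8, 12), threshold=0.5):
        # image: 2-D array of grayscale values in [0, 1].
        # Returns a grid_shape array of 0/1 flags (1 = taxel raised).
        rows, cols = grid_shape
        h, w = image.shape
        taxels = np.zeros(grid_shape, dtype=int)
        for r in range(rows):
            for c in range(cols):
                # Average the image patch that falls under this taxel.
                patch = image[r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols]
                taxels[r, c] = int(patch.mean() >= threshold)  # white -> raised
        return taxels

    # Example: a simple vertical bar shape rendered as a taxel pattern.
    img = np.zeros((32, 48))
    img[:, 20:28] = 1.0
    print(image_to_taxels(img))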
NeuroImage | 2010
Jason S. Chan; Cristina Simões-Franklin; Hugh Garavan; Fiona N. Newell
Although many studies have found similar cortical areas activated during the recognition of objects encoded through vision or touch, little is known about the cortical areas involved in the crossmodal recognition of dynamic objects. Here, we investigated which cortical areas are involved in the recognition of moving objects and were specifically interested in whether motion areas contribute to the recognition of dynamic objects within and across sensory modalities. Prior to scanning, participants first learned to recognise a set of 12 novel objects, each presented either visually or haptically, and either moving or stationary. We then conducted fMRI whilst participants performed an old-new task with static images of learned or not-learned objects. We found the fusiform and right inferior frontal gyri more activated during within-modal visual than during crossmodal object recognition. Our results also revealed increased activation in area hMT+, LOC and the middle occipital gyrus, in the right hemisphere only, for objects learned as moving compared with objects learned as static, regardless of modality. We propose that the network of cortical areas involved in the recognition of dynamic objects is largely independent of modality; these findings have important implications for understanding the neural substrates of multisensory dynamic object recognition.
Multisensory Research | 2018
Jason S. Chan; Shannon K. Connolly; Annalisa Setti
The sound-induced flash illusion is a multisensory illusion occurring when one flash is presented with two beeps and perceived as two flashes. Younger individuals are largely susceptible to the illusion when the stimulus onset asynchrony between the first and the second beep falls within the temporal window of integration, but susceptibility falls dramatically outside of this short temporal range. Older individuals, in particular older adults prone to falling and/or with mild cognitive impairment, show an extended susceptibility to the illusion. This suggests that they have inefficient multisensory integration, particularly in the temporal domain. In the present study, we investigated the reliability of the illusion across younger and older people, guided by the hypothesis that the experimental context, i.e., exposure to a wider or narrower range of stimulus onset asynchronies, would modify intra-personal susceptibility to the illusion at shorter vs. longer asynchronies, likely due to the gathering of model evidence based on Bayesian inference. We tested 22 younger adults and 29 older adults, and the results verified these hypotheses. Both groups showed higher susceptibility to the illusion when exposed to a smaller range of asynchronies, but only for longer ones, not within the 100 ms window. We discuss the theoretical implications in terms of online perceptual learning and the practical implications in terms of standardisation of the experimental context when attempting to find normative values.
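The temporal-window account above can be illustrated with a toy Gaussian window model, assuming made-up parameters (the window widths, lapse rate, and SOA values below are hypothetical and not fitted to the study's data): susceptibility to the illusory second flash falls off as the beep-beep stimulus onset asynchrony leaves the window, and a wider window predicts the extended susceptibility reported for older groups.

    # Illustrative toy model of a Gaussian temporal-binding window for the
    # sound-induced flash illusion (hypothetical parameters, not fitted data).
    import numpy as np

    def illusion_probability(soa_ms, window_sd_ms=100.0, lapse=0.05):
        # Probability of reporting two flashes as a function of the SOA between
        # the two beeps; a wider window (larger SD) predicts susceptibility at
        # longer SOAs, as reported for the older groups.
        p = np.exp(-0.5 * (soa_ms / window_sd_ms) ** 2)
        return lapse + (1 - 2 * lapse) * p

    soas = np.array([70, 110, 150, 230])
    print("narrow window  :", np.round(illusion_probability(soas, 100.0), 2))
    print("extended window:", np.round(illusion_probability(soas, 200.0), 2))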
International Journal of Autonomous and Adaptive Communications Systems | 2013
Jason S. Chan; Fiona N. Newell
Previous studies found that performance in tactile or haptic spatial tasks improved when non-informative visual information was available, suggesting that vision provides a precise spatial frame to which tactile information is referred. Here, we explored whether another intrinsically spatial modality, audition, can also affect haptic recognition. In all experiments, blindfolded participants first learned a scene through touch and were subsequently required to recognise the scene. We found no effect on haptic performance when white-noise stimuli were presented from specific locations (Experiment 1). However, performance was significantly reduced by pure-tone stimuli presented from the same locations (Experiment 2); moreover, these tones disrupted recall but not encoding of the haptic scene (Experiment 3). In Experiment 4, we found that spatial rather than non-spatial auditory information was required to affect haptic performance. Finally, in Experiment 5 we found no specific benefit for familiar sound cues over unfamiliar or no sounds on haptic spatial performance. Our findings suggest that, in contrast to vision, auditory information is unlikely to provide sufficient spatial precision and instead disrupts the spatial representation of haptic information. Our results add to a growing body of evidence for multisensory influences in the perception of space.