Robyn Kim
University of California, Los Angeles
Publications
Featured research published by Robyn Kim.
Current Biology | 2006
Aaron R. Seitz; Robyn Kim; Ladan Shams
Numerous studies show that practice can result in performance improvements on low-level visual perceptual tasks [1-5]. However, such learning is characteristically difficult and slow, requiring many days of training [6-8]. Here, we show that a multisensory audiovisual training procedure facilitates visual learning and results in significantly faster learning than unisensory visual training. We trained one group of subjects with an audiovisual motion-detection task and a second group with a visual motion-detection task, and compared performance on trials containing only visual signals across ten days of training. Whereas observers in both groups showed improvements of visual sensitivity with training, subjects trained with multisensory stimuli showed significantly more learning both within and across training sessions. These benefits of multisensory training are particularly surprising given that the learning of visual motion stimuli is generally thought to be mediated by low-level visual brain areas [6, 9, 10]. Although crossmodal interactions are ubiquitous in human perceptual processing [11-13], the contribution of crossmodal information to perceptual learning has not been studied previously. Our results show that multisensory interactions can be exploited to yield more efficient learning of sensory information and suggest that multisensory training programs would be most effective for the acquisition of new skills.
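The comparison of learning "both within and across training sessions" is a rate question: how quickly does visual sensitivity improve per session in each group? As a purely illustrative sketch, not taken from the paper, one common way to summarize such data is to fit a saturating exponential learning curve to per-session sensitivity; all numbers and parameter names below are hypothetical.

```python
# Illustrative sketch (not from the paper): comparing learning across the ten
# training sessions by fitting a saturating exponential to per-session
# visual sensitivity (d'). All data values and parameter names are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(session, asymptote, gain, rate):
    """d' as a saturating exponential function of training session."""
    return asymptote - gain * np.exp(-rate * session)

sessions = np.arange(1, 11)  # ten days of training
dprime_audiovisual = np.array([0.6, 0.9, 1.2, 1.5, 1.7, 1.9, 2.0, 2.1, 2.1, 2.2])
dprime_visual_only = np.array([0.6, 0.7, 0.9, 1.0, 1.2, 1.3, 1.4, 1.5, 1.6, 1.6])

for label, dprime in [("audiovisual-trained", dprime_audiovisual),
                      ("visual-trained", dprime_visual_only)]:
    params, _ = curve_fit(learning_curve, sessions, dprime,
                          p0=[2.0, 2.0, 0.3], maxfev=10000)
    print(f"{label}: asymptote={params[0]:.2f}, learning rate={params[2]:.2f}")
```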
Physics of Life Reviews | 2010
Ladan Shams; Robyn Kim
Vision is generally considered the dominant sensory modality, self-contained and independent of the other senses. In this article, we will present recent results that contradict this view and show that visual perception can be strongly altered by sound and touch, and that such alterations can occur even at early stages of processing, as early as primary visual cortex. We will first review the behavioral evidence demonstrating modulation of visual perception by other modalities. As extreme examples of such modulations, we will describe two visual illusions induced by sound, and a visual illusion induced by touch. Next, we will discuss studies demonstrating modulation of activity in visual areas by stimulation of other modalities, and discuss possible pathways that could underpin such interactions. This will be followed by a discussion of how crossmodal interactions can affect visual learning and adaptation. We will review several studies showing crossmodal effects on visual learning. We will conclude with a discussion of computational principles governing these crossmodal interactions, and review several recent studies demonstrating that these interactions are statistically optimal.
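The "statistically optimal" integration mentioned at the end of this abstract is conventionally formalized as maximum-likelihood combination of independent Gaussian cue estimates, in which each cue is weighted by its reliability (inverse variance). A minimal sketch of that standard formula follows; the example values are made up for illustration and are not drawn from any of the studies reviewed.

```python
# Minimal sketch of maximum-likelihood (precision-weighted) cue combination,
# the standard formalization of "statistically optimal" integration.
# Example values are made up for illustration.

def optimal_combination(s_a, var_a, s_v, var_v):
    """Combine auditory and visual estimates of the same quantity.

    Each cue contributes in proportion to its reliability (inverse variance);
    the combined estimate has lower variance than either cue alone.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    s_combined = w_a * s_a + w_v * s_v
    var_combined = (var_a * var_v) / (var_a + var_v)
    return s_combined, var_combined

# Example: a reliable visual location estimate and a noisy auditory one.
s_hat, var_hat = optimal_combination(s_a=10.0, var_a=4.0, s_v=12.0, var_v=1.0)
print(s_hat, var_hat)  # 11.6, 0.8 -- pulled toward vision, variance below either cue
```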
PLOS ONE | 2008
Robyn Kim; Aaron R. Seitz; Ladan Shams
Background: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. Methodology/Principal Findings: Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. Conclusions/Significance: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.
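For readers unfamiliar with motion coherence detection, the visual stimulus is typically a random-dot kinematogram in which only a fraction of dots (the coherence level) moves in a common direction while the rest move randomly. The sketch below illustrates one common frame-update rule under assumed parameter values; it is not the authors' actual stimulus code.

```python
# Illustrative sketch of a random-dot motion-coherence stimulus update:
# a fraction of dots ("coherence") steps in a common direction, the rest
# step in random directions. Parameters are assumed for illustration only.
import numpy as np

def update_dots(xy, coherence=0.1, direction=0.0, step=1.0, rng=None):
    """Move an (N, 2) array of dot positions by one frame."""
    if rng is None:
        rng = np.random.default_rng()
    n = xy.shape[0]
    coherent = rng.random(n) < coherence        # which dots carry the signal
    angles = rng.uniform(0, 2 * np.pi, size=n)  # random directions for noise dots
    angles[coherent] = direction                # shared direction for signal dots
    return xy + step * np.column_stack([np.cos(angles), np.sin(angles)])

dots = np.random.default_rng(0).uniform(-100, 100, size=(200, 2))
dots = update_dots(dots, coherence=0.1, direction=np.pi / 2)  # 10% upward motion
```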
Perception | 2007
Aaron R. Seitz; Robyn Kim; Virginie van Wassenhove; Ladan Shams
Although humans are almost constantly exposed to stimuli from multiple sensory modalities during daily life, the processes by which we learn to integrate information from multiple senses to acquire knowledge of multisensory objects are not well understood. Here, we present results of a novel audio-visual statistical learning procedure in which participants are passively exposed to a rapid serial presentation of arbitrary audio-visual pairings (composed of artificial/synthetic audio and visual stimuli). Following this exposure, participants were tested with a two-interval forced-choice procedure in which their degree of familiarity with the experienced audio-visual pairings was evaluated against novel audio-visual combinations drawn from the same stimulus set. Our results show that subjects acquire knowledge of visual-visual, audio-audio, and audio-visual stimulus associations, and that the learning of these types of associations occurs in an independent manner.
Frontiers in Psychology | 2011
Ladan Shams; David R. Wozny; Robyn Kim; Aaron R. Seitz
Multisensory perception has been the focus of intense investigation in recent years. It is now well established that crossmodal interactions are ubiquitous in perceptual processing and endow the system with improved precision, accuracy, processing speed, etc. While these findings have shed much light on principles and mechanisms of perception, ultimately it is not very surprising that multiple sources of information provide benefits in performance compared to a single source of information. Here, we argue that the more surprising recent findings are those showing that multisensory experience also influences subsequent unisensory processing. For example, exposure to auditory-visual stimuli can change the way that auditory or visual stimuli are processed subsequently, even in isolation. We review three sets of findings that represent three different types of learning, ranging from perceptual learning, to sensory recalibration, to associative learning. In all these cases, exposure to multisensory stimuli profoundly influences the subsequent unisensory processing. This diversity of phenomena suggests that continuous modification of unisensory representations by multisensory relationships may be a general learning strategy employed by the brain.
Psychological Science | 2012
Robyn Kim; Megan A.K. Peters; Ladan Shams
It is well known that the nervous system combines information from different cues within and across sensory modalities to improve performance on perceptual tasks. In this article, we present results showing that in a visual motion-detection task, concurrent auditory motion stimuli improve accuracy even when they do not provide any useful information for the task. When participants judged which of two stimulus intervals contained visual coherent motion, the addition of identical moving sounds to both intervals improved accuracy. However, this enhancement occurred only with sounds that moved in the same direction as the visual motion. Therefore, it appears that the observed benefit of auditory stimulation is due to auditory-visual interactions at a sensory level. Thus, auditory and visual motion-processing pathways interact at a sensory-representation level in addition to the level at which perceptual estimates are combined.
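In two-interval forced-choice tasks like the one described here, accuracy is commonly converted to a sensitivity index before comparing conditions; a standard convention for unbiased 2IFC is d' = √2 · Φ⁻¹(proportion correct). The sketch below applies that conversion to hypothetical accuracies, not data from the paper.

```python
# Converting two-interval forced-choice accuracy to sensitivity (d'),
# using the standard 2IFC convention d' = sqrt(2) * z(proportion correct).
# The accuracies below are hypothetical, not data from the paper.
from math import sqrt
from scipy.stats import norm

def dprime_2ifc(proportion_correct):
    """Sensitivity index for an unbiased two-interval forced-choice task."""
    return sqrt(2) * norm.ppf(proportion_correct)

for condition, pc in [("visual motion only", 0.70),
                      ("with direction-congruent moving sound", 0.76)]:
    print(f"{condition}: d' = {dprime_2ifc(pc):.2f}")
```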
Neuroscience Letters | 2009
Robyn Kim; Aaron R. Seitz; Heather Feenstra; Ladan Shams
Journal of Vision | 2010
Robyn Kim; Aaron R. Seitz; Ladan Shams