Christopher W. Robinson
Ohio State University
Publications
Featured research published by Christopher W. Robinson.
Cognitive Science | 2008
Vladimir M. Sloutsky; Christopher W. Robinson
Although it is well documented that language plays an important role in cognitive development, there are different views concerning the mechanisms underlying these effects. Some argue that even early in development, effects of words stem from top-down knowledge, whereas others argue that these effects stem from auditory input affecting attention allocated to visual input. Previous research (e.g., Robinson & Sloutsky, 2004a) demonstrated that non-speech sounds attenuate processing of corresponding visual input at 8, 12, and 16 months of age, whereas the current study demonstrates that words attenuate visual processing at 10 months but not at 16 months (Experiment 1). Furthermore, prefamiliarization with non-speech sounds (Experiment 2) resulted in reliable processing of visual input by 16-month-olds. These findings suggest that some effects of labels found early in development may stem from familiarity with human speech. The possibility of general-auditory factors underlying the effects of words on cognitive development is discussed.
Journal of Experimental Child Psychology | 2010
Christopher W. Robinson; Vladimir M. Sloutsky
Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of auditory and visual stimuli was assessed during a subsequent testing phase. In Experiment 2, the familiarity of the auditory or visual stimulus was systematically manipulated by prefamiliarizing infants to either the auditory or visual stimulus prior to the experiment proper. With the exception of the prefamiliarized auditory condition in Experiment 2, infants in the multimodal conditions failed to increase looking when the visual component changed at test. This finding is noteworthy given that infants discriminated the same visual stimuli when presented unimodally, and there was no evidence that multimodal presentation attenuated auditory processing. Possible factors underlying these effects are discussed.
Frontiers in Psychology | 2012
Christopher W. Robinson; Catherine A. Best; Wei (Sophia) Deng; Vladimir M. Sloutsky
The current review focuses on how exposure to linguistic input, and count nouns in particular, affect performance on various cognitive tasks, including individuation, categorization and category learning, and inductive inference. We review two theoretical accounts of effects of words. Proponents of one account argue that words have top-down effects on cognitive tasks, and, as such, function as supervisory signals. Proponents of the other account suggest that early in development, words, just like any other perceptual feature, are first and foremost part of the stimulus input and influence cognitive tasks in a bottom-up, non-supervisory fashion. We then review evidence supporting each account. We conclude that, although much research is needed, there is a large body of evidence indicating that words start out like other perceptual features and become supervisory signals in the course of development.
Experimental Psychology | 2013
Christopher W. Robinson; Vladimir M. Sloutsky
Presenting information to multiple sensory modalities sometimes facilitates and sometimes interferes with processing of this information. Research examining interference effects shows that auditory input often interferes with processing of visual input in young children (i.e., auditory dominance effect), whereas visual input often interferes with auditory processing in adults (i.e., visual dominance effect). The current study used a cross-modal statistical learning task to examine modality dominance in adults. Participants reliably learned auditory and visual statistics when auditory and visual sequences were presented unimodally and when auditory and visual sequences were correlated during training. However, increasing task demands resulted in an important asymmetry: Increased task demands attenuated visual statistical learning while having no effect on auditory statistical learning. These findings are consistent with auditory dominance effects reported in young children and have important implications for our understanding of how sensory modalities interact while learning the structure of cross-modal information.
Cognition | 2013
Vladimir M. Sloutsky; Christopher W. Robinson
Many objects and events can be categorized in different ways, and learning multiple categories in parallel often requires flexibly attending to different stimulus dimensions in different contexts. Although infants and young children often exhibit poor attentional control, several theoretical proposals argue that such flexibility can be achieved without selective attention. If this is the case, then even young infants should be able to learn multiple dimension-context contingencies in parallel. This possibility was tested in four experiments with 14- and 22-month-olds. Learning of contingencies succeeded as long as there were multiple correlations between the context and the to-be-learned dimension. These findings suggest that infants can learn multiple dimension-context contingencies in parallel, but only when there is sufficient redundancy in the input.
Infant Behavior & Development | 1999
Peg Hull Smith; Jeannette Whitmore; Wendelyn J. Shore; Christopher W. Robinson; Wallace E. Dixon
The effects of differing levels of word knowledge on infants’ sequential touching behaviors were investigated in two studies. In both, parent report was used to assess three levels of word knowledge: known, frontier, and unknown. In the first study, 14-month-old infants sequentially touched objects consistent with parents’ reports of their word knowledge. In the second study, 20-month-old infants sequentially touched objects by both conceptual category and reported level of word knowledge. It appears that even infants, like adults, can make distinctions among objects on the basis of their knowledge about the objects’ labels.
Acta Psychologica | 2018
Wesley R. Barnhart; Samuel Rivera; Christopher W. Robinson
The present study sought to better understand how children, young adults, and older adults attend and respond to multisensory information. In Experiment 1, young adults were presented with two spoken words, two pictures, or two word-picture pairings and they had to determine if the two stimuli/pairings were exactly the same or different. Pairing the words and pictures together slowed down visual but not auditory response times and delayed the latency of first fixations, both of which are consistent with a proposed mechanism underlying auditory dominance. Experiment 2 examined the development of modality dominance in children, young adults, and older adults. Cross-modal presentation attenuated visual accuracy and slowed down visual response times in children, whereas older adults showed the opposite pattern, with cross-modal presentation attenuating auditory accuracy and slowing down auditory response times. Cross-modal presentation also delayed first fixations in children and young adults. Mechanisms underlying modality dominance and multisensory processing are discussed.
Psychology and Aging | 2018
Jessica Parker; Christopher W. Robinson
The study examined the individual contributions of visual and auditory information to multisensory integration across the life span. In the experiment, children, young adults, and older adults participated in a variant of the Sound-Induced Flash Illusion in which participants had to either ignore beeps and report how many flashes they saw or ignore flashes and report how many beeps they heard. Collapsed across age, auditory input had a stronger effect on visual processing than vice versa. However, the relative contributions of auditory and visual information interacted with age: young adults showed evidence of auditory dominance (only auditory input affected visual processing), whereas multisensory integration effects were more symmetrical in children and older adults. The findings have implications for many tasks that require the processing of multisensory information.
Frontiers in Psychology | 2018
Wesley R. Barnhart; Samuel Rivera; Christopher W. Robinson
Effects of linguistic labels on learning outcomes are well established; however, developmental research examining possible mechanisms underlying these effects has provided mixed results. We used a novel paradigm in which 8-year-olds and adults were simultaneously trained on three sparse categories (categories with many irrelevant or unique features and a single rule-defining feature). Category members were associated with the same label, different labels, or no labels (silent baseline). Similar to infant paradigms, participants passively viewed individual exemplars, and we examined fixations to category-relevant features across training. While it is well established that adults can optimize their attention in forced-choice categorization tasks without linguistic input, the present findings provide support for label-induced attention optimization: simply hearing the same label associated with different exemplars was associated with increased attention to category-relevant features over time, and participants continued to focus on these features on a subsequent recognition task. Participants also viewed images longer and made more fixations when images were paired with unique labels. These findings support the claim that labels may facilitate categorization by directing attention to category-relevant features.
Child Development | 2004
Christopher W. Robinson; Vladimir M. Sloutsky