Publication


Featured research published by Lars Strother.


Journal of Cognitive Neuroscience | 2014

Haptic shape processing in visual cortex

Jacqueline C. Snow; Lars Strother; Glyn W. Humphreys

Humans typically rely upon vision to identify object shape, but we can also recognize shape via touch (haptics). Our haptic shape recognition ability raises an intriguing question: To what extent do visual cortical shape recognition mechanisms support haptic object recognition? We addressed this question using a haptic fMRI repetition design, which allowed us to identify neuronal populations sensitive to the shape of objects that were touched but not seen. In addition to the expected shape-selective fMRI responses in dorsal frontoparietal areas, we observed widespread shape-selective responses in the ventral visual cortical pathway, including primary visual cortex. Our results indicate that shape processing via touch engages many of the same neural mechanisms as visual object recognition. The shape-specific repetition effects we observed in primary visual cortex show that visual sensory areas are engaged during the haptic exploration of object shape, even in the absence of concurrent shape-related visual input. Our results complement related findings in visually deprived individuals and highlight the fundamental role of the visual system in the processing of object shape.
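
As an aside for readers unfamiliar with repetition (fMRI adaptation) designs, the logic can be sketched in a few lines of Python. The percent-signal-change values below are hypothetical and serve only to illustrate the contrast; they are not data from the study, and the real analysis pipeline is considerably more involved.

import numpy as np

# Hypothetical BOLD amplitudes (% signal change) for one region of interest
novel    = np.array([1.10, 0.95, 1.20, 1.05])   # trials presenting a new, unfelt shape
repeated = np.array([0.80, 0.70, 0.90, 0.75])   # trials repeating the shape just touched

# A repetition design infers shape sensitivity from the drop in response
# when the same shape occurs again.
suppression = novel.mean() - repeated.mean()
index = suppression / (novel.mean() + repeated.mean())
print(f"repetition suppression: {suppression:.2f} (index {index:.2f})")

# A reliably positive effect in a visual area implies that its neurons carry
# shape information, even though the shapes were touched rather than seen.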


Journal of Neurophysiology | 2010

Equal Degrees of Object Selectivity for Upper and Lower Visual Field Stimuli

Lars Strother; Adrian Aldcroft; Cheryl Lavell; Tutis Vilis

Functional MRI (fMRI) studies of the human object recognition system commonly identify object-selective cortical regions by comparing blood oxygen level-dependent (BOLD) responses to objects versus those to scrambled objects. Object selectivity distinguishes human lateral occipital cortex (LO) from earlier visual areas. Recent studies suggest that, in addition to being object selective, LO is retinotopically organized; LO represents both object and location information. Although LO responses to objects have been shown to depend on location, it is not known whether responses to scrambled objects vary similarly. This is important because it would suggest that the degree of object selectivity in LO does not vary with retinal stimulus position. We used a conventional functional localizer to identify human visual area LO by comparing BOLD responses to objects versus scrambled objects presented to either the upper (UVF) or lower (LVF) visual field. In agreement with recent findings, we found evidence of position-dependent responses to objects. However, we observed the same degree of position dependence for scrambled objects and thus object selectivity did not differ for UVF and LVF stimuli. We conclude that, in terms of BOLD response, LO discriminates objects from non-objects equally well in either visual field location, despite stronger responses to objects in the LVF.
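
The localizer contrast at the heart of this study can be illustrated with a short Python sketch. The response values are invented for illustration only, not the reported data, and the normalized index is just one common way of expressing object selectivity.

# Hypothetical mean BOLD responses (% signal change) in area LO
bold = {
    "LVF": {"objects": 1.5, "scrambled": 0.9},   # lower visual field
    "UVF": {"objects": 1.0, "scrambled": 0.6},   # upper visual field
}

for field, r in bold.items():
    # Normalized object selectivity: preference for intact over scrambled objects
    selectivity = (r["objects"] - r["scrambled"]) / (r["objects"] + r["scrambled"])
    print(f"{field}: object selectivity = {selectivity:.2f}")

# Responses can be stronger overall in the LVF while the normalized selectivity
# is identical across fields, which is the pattern the abstract reports.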


PLOS ONE | 2016

Atypical Asymmetry for Processing Human and Robot Faces in Autism Revealed by fNIRS.

Corinne E. Jung; Lars Strother; David J. Feil-Seifer; Jeffrey J. Hutsler

Deficits in the visual processing of faces in autism spectrum disorder (ASD) individuals may be due to atypical brain organization and function. Studies assessing asymmetric brain function in ASD individuals have suggested that facial processing, which is known to be lateralized in neurotypical (NT) individuals, may be less lateralized in ASD. Here we used functional near-infrared spectroscopy (fNIRS) to first test this theory by comparing patterns of lateralized brain activity in homologous temporal-occipital facial processing regions during observation of faces in an ASD group and an NT group. As expected, the ASD participants showed reduced right hemisphere asymmetry for human faces, compared to the NT participants. Based on recent behavioral reports suggesting that robots can facilitate increased verbal interaction over human counterparts in ASD, we also measured responses to faces of robots to determine if these patterns of activation were lateralized in each group. In this exploratory test, both groups showed similar asymmetry patterns for the robot faces. Our findings confirm existing literature suggesting reduced asymmetry for human faces in ASD and provide a preliminary foundation for future testing of how the use of categorically different social stimuli in the clinical setting may be beneficial in this population.
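
Hemispheric asymmetry of this kind is often summarized with a laterality index; the snippet below illustrates the idea with made-up response values and is not necessarily the metric or the preprocessing used in the paper.

def laterality_index(left: float, right: float) -> float:
    """LI = (L - R) / (L + R); negative values indicate right-lateralized activity."""
    return (left - right) / (left + right)

# Hypothetical temporal-occipital responses to human faces (arbitrary units)
nt  = laterality_index(left=0.4, right=0.9)   # neurotypical: strong right asymmetry
asd = laterality_index(left=0.6, right=0.7)   # ASD: reduced asymmetry
print(f"NT LI = {nt:+.2f}, ASD LI = {asd:+.2f}")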


Journal of Neuroscience Research | 2017

Sex differences in the human visual system

John Erik Vanston; Lars Strother

This Mini‐Review summarizes a wide range of sex differences in the human visual system, with a primary focus on sex differences in visual perception and its neural basis. We highlight sex differences in both basic and high‐level visual processing, with evidence from behavioral, neurophysiological, and neuroimaging studies. We argue that sex differences in human visual processing, no matter how small or subtle, support the view that females and males truly see the world differently. We acknowledge some of the controversy regarding sex differences in human vision and propose that such controversy should be interpreted as a source of motivation for continued efforts to assess the validity and reliability of published sex differences and for continued research on sex differences in human vision and the nervous system in general.


Frontiers in Human Neuroscience | 2015

The lemon illusion: seeing curvature where there is none

Lars Strother; Kyle Killebrew; Gideon Caplovitz

Curvature is a highly informative visual cue for shape perception and object recognition. We introduce a novel illusion—the Lemon Illusion—in which subtle illusory curvature is perceived along contour regions that are devoid of physical curvature. We offer several perceptual demonstrations and observations that lead us to conclude that the Lemon Illusion is an instance of a more general illusory curvature phenomenon, one in which the presence of contour curvature discontinuities leads to the erroneous extension of perceived curvature. We propose that this erroneous extension of perceived curvature results from the interaction of neural mechanisms that operate on spatially local contour curvature signals with higher-tier mechanisms that serve to establish more global representations of object shape. Our observations suggest that the Lemon Illusion stems from discontinuous curvature transitions between rectilinear and curved contour segments. However, the presence of curvature discontinuities is not sufficient to produce the Lemon Illusion, and the minimal conditions necessary to elicit this subtle and insidious illusion are difficult to pin down.
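
The geometric quantity at issue, contour curvature and its discontinuities, can be made concrete with a small sketch: along a contour built from a circular arc joined tangentially to a straight segment, curvature drops abruptly from 1/r to zero at the junction. This is only an illustration of the stimulus geometry under assumed parameters, not the authors' analysis or their exact lemon-shaped figure.

import numpy as np

# Quarter circle of radius 1 followed by a tangent straight segment
t = np.linspace(0, np.pi / 2, 50)
arc  = np.column_stack([np.cos(t), np.sin(t)])                 # curvature = 1 everywhere
line = np.column_stack([np.linspace(0, -1, 50), np.ones(50)])  # curvature = 0 everywhere
contour = np.vstack([arc, line[1:]])                           # drop the duplicated joint

def discrete_curvature(p):
    """Turning angle divided by local arc length at each interior vertex."""
    v1, v2 = p[1:-1] - p[:-2], p[2:] - p[1:-1]
    turn = np.arctan2(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0], (v1 * v2).sum(axis=1))
    ds = 0.5 * (np.linalg.norm(v1, axis=1) + np.linalg.norm(v2, axis=1))
    return turn / ds

k = discrete_curvature(contour)
print(k[:3].round(2), k[-3:].round(2))   # ~[1 1 1] on the arc, ~[0 0 0] on the line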


Attention, Perception, & Psychophysics | 2015

Spatiotemporal Form Integration: sequentially presented inducers can lead to representations of stationary and rigidly rotating objects.

J. Daniel McCarthy; Lars Strother; Gideon Caplovitz

Objects in the world often are occluded and in motion. The visible fragments of such objects are revealed at different times and locations in space. To form coherent representations of the surfaces of these objects, the visual system must integrate local form information over space and time. We introduce a new illusion in which a rigidly rotating square is perceived on the basis of sequentially presented Pacman inducers. The illusion highlights two fundamental processes that allow us to perceive objects whose form features are revealed over time: Spatiotemporal Form Integration (STFI) and Position Updating. STFI refers to the spatial integration of persistent representations of local form features across time. Position updating of these persistent form representations allows them to be integrated into a rigid global motion percept. We describe three psychophysical experiments designed to identify spatial and temporal constraints that underlie these two processes and a fourth experiment that extends these findings to more ecologically valid stimuli. Our results indicate that although STFI can occur across relatively long delays between successive inducers (i.e., greater than 500 ms), position updating is limited to a more restricted temporal window (i.e., ~300 ms or less), and to a confined range of spatial (mis)alignment. These findings lend insight into the limits of mechanisms underlying the visual system’s capacity to integrate transient, piecemeal form information, and support coherent object representations in the ever-changing environment.


Frontiers in Human Neuroscience | 2015

The Dynamic Ebbinghaus: motion dynamics greatly enhance the classic contextual size illusion

Ryan E. B. Mruczek; Christopher Blair; Lars Strother; Gideon Caplovitz

The Ebbinghaus illusion is a classic example of the influence of a contextual surround on the perceived size of an object. Here, we introduce a novel variant of this illusion called the Dynamic Ebbinghaus illusion, in which the size and eccentricity of the surrounding inducers modulate dynamically over time. Under these conditions, the size of the central circle is perceived to change in opposition to the size of the inducers. Interestingly, this illusory effect is relatively weak when participants are fixating a stationary central target, less than half the magnitude of the classic static illusion. However, when the entire stimulus translates in space, requiring a smooth pursuit eye movement to track the target, the illusory effect is greatly enhanced, almost twice the magnitude of the classic static illusion. A variety of manipulations including target motion, peripheral viewing, and smooth pursuit eye movements all lead to dramatic illusory effects, with the largest effect nearly four times the strength of the classic static illusion. We interpret these results in light of the fact that motion-related manipulations lead to uncertainty in the image size representation of the target, specifically due to added noise at the level of the retinal input. We propose that the neural circuits integrating visual cues for size perception, such as retinal image size, perceived distance, and various contextual factors, weight each cue according to the level of noise or uncertainty in its neural representation. Thus, more weight is given to the influence of contextual information in deriving perceived size in the presence of stimulus and eye motion. Biologically plausible models of size perception should be able to account for the reweighting of different visual cues under varying levels of certainty.
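
The reweighting account in the closing sentences corresponds to standard reliability-weighted cue combination, in which each cue's weight is inversely proportional to its variance. The sketch below uses invented cue values and variances to show the qualitative prediction; it is not a model fit from the paper.

import numpy as np

def combine(estimates, variances):
    """Reliability-weighted average: weights proportional to 1 / variance."""
    w = (1.0 / variances) / np.sum(1.0 / variances)
    return float(np.dot(w, estimates)), w

# Two size cues for the central target: retinal image size and the
# contextual (inducer-based) estimate, in arbitrary units
cues = np.array([1.00, 1.20])

# Steady fixation: retinal size is reliable, so context carries little weight
size_static, w_static = combine(cues, np.array([0.05, 0.20]))

# Target/eye motion adds noise to the retinal-size signal, shifting weight
# toward the contextual cue and enlarging the illusory size change
size_moving, w_moving = combine(cues, np.array([0.30, 0.20]))

print(f"static: {size_static:.2f} with weights {np.round(w_static, 2)}")
print(f"moving: {size_moving:.2f} with weights {np.round(w_moving, 2)}")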


Frontiers in Psychology | 2014

Inter-element orientation and distance influence the duration of persistent contour integration

Lars Strother; Danila Alferov

Contour integration is a fundamental form of perceptual organization. We introduce a new method of studying the mechanisms responsible for contour integration. This method capitalizes on the perceptual persistence of contours under conditions of impending camouflage. Observers viewed arrays of randomly arranged line segments upon which circular contours comprised of similar line segments were superimposed via abrupt onset. Crucially, these contours remained visible for up to a few seconds following onset, but eventually disappeared due to the camouflaging effects of surrounding background line segments. Our main finding was that the duration of contour visibility depended on the distance and degree of co-alignment between adjacent contour segments, such that relatively dense smooth contours persisted longest. The stimulus-related effects reported here parallel similar results from contour detection studies and complement previously reported top-down influences on contour persistence (Strother et al., 2011). We propose that persistent contour visibility reflects the sustained activity of recurrent processing loops within and between visual cortical areas involved in contour integration and other important stages of visual object recognition.
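
The display described above, a circular contour of co-aligned segments embedded in a field of random segments, is straightforward to sketch. The parameters below are illustrative assumptions, not the authors' stimulus code.

import numpy as np

rng = np.random.default_rng(0)

# Background: randomly placed, randomly oriented line segments
n_bg = 400
bg_xy  = rng.uniform(-1.0, 1.0, size=(n_bg, 2))
bg_ori = rng.uniform(0.0, np.pi, size=n_bg)

# Target contour: segments on a circle, each oriented along the local tangent
n_seg, radius = 16, 0.5
theta = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
c_xy  = radius * np.column_stack([np.cos(theta), np.sin(theta)])
c_ori = (theta + np.pi / 2.0) % np.pi   # tangent orientation, i.e., full co-alignment

# Per the abstract, closer spacing (larger n_seg) and orientations nearer the
# tangent prolong the contour's visibility before background camouflage wins.
print(c_xy[:3].round(2), c_ori[:3].round(2))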


Neuropsychologia | 2017

An fMRI study of visual hemifield integration and cerebral lateralization

Lars Strother; Zhiheng Zhou; Alexandra Coros; Tutis Vilis

The human brain integrates hemifield-split visual information via interhemispheric transfer. The degree to which neural circuits involved in this process behave differently during word recognition as compared to object recognition is not known. Evidence from neuroimaging (fMRI) suggests that interhemispheric transfer during word viewing converges in the left hemisphere, in two distinct brain areas: an "occipital word form area" (OWFA) and a more anterior occipitotemporal "visual word form area" (VWFA). We used a novel fMRI half-field repetition technique to test whether or not these areas also integrate nonverbal hemifield-split string stimuli of similar visual complexity. We found that the fMRI responses of both the OWFA and VWFA while viewing nonverbal stimuli were strikingly different from those measured during word viewing, especially with respect to half-stimulus changes restricted to a single hemifield. We conclude that normal reading relies on left-lateralized neural mechanisms, which integrate hemifield-split visual information for words but not for nonverbal stimuli.

Highlights:
Results are reported from an fMRI half-field repetition paradigm.
Left and right hemispheres show distinct patterns of repetition suppression.
Half-field suppression is different for words and non-verbal stimuli.
An occipital word form area (OWFA) underlies split-word binding.


Psychonomic Bulletin & Review | 2018

Visual recognition of mirrored letters and the right hemisphere advantage for mirror-invariant object recognition

Matthew Harrison; Lars Strother

Unlike the recognition of most objects, letter recognition is closely tied to orientation and mirroring, which in some cases (e.g., b and d) define letter identity altogether. We combined a divided-field paradigm with a negative priming procedure to examine the relationship between mirror generalization, its suppression during letter recognition, and language-related visual processing in the left hemisphere. In our main experiment, observers performed a centrally viewed letter-recognition task, followed by an object-recognition task performed in either the right or the left visual hemifield. The results show clear evidence of inhibition of mirror generalization for objects viewed in either hemifield, but a right hemisphere advantage for visual recognition of mirrored and repeated objects. Our findings are consistent with an opponent relationship between symmetry-related visual processing in the right hemisphere and neurally recycled mechanisms in the left hemisphere used for visual processing of written language stimuli.

Collaboration


Dive into Lars Strother's collaborations.

Top Co-Authors

Tutis Vilis
University of Western Ontario

Joseph Nah
George Washington University

Marlene Behrmann
Carnegie Mellon University