
Publication

Featured research published by Dwight J. Kravitz.


The Journal of Neuroscience | 2011

Real-World Scene Representations in High-Level Visual Cortex: It's the Spaces More Than the Places

Dwight J. Kravitz; Cynthia S. Peng; Chris I. Baker

Real-world scenes are incredibly complex and heterogeneous, yet we are able to identify and categorize them effortlessly. In humans, the ventral temporal parahippocampal place area (PPA) has been implicated in scene processing, but scene information is contained in many visual areas, leaving their specific contributions unclear. Although early theories of PPA emphasized its role in spatial processing, more recent reports of its function have emphasized semantic or contextual processing. Here, using functional imaging, we reconstructed the organization of scene representations across human ventral visual cortex by analyzing the distributed response to 96 diverse real-world scenes. We found that, although individual scenes could be decoded in both PPA and early visual cortex (EVC), the structure of representations in these regions was vastly different. In both regions, spatial rather than semantic factors defined the structure of representations. However, in PPA, representations were defined primarily by the spatial factor of expanse (open, closed) and in EVC primarily by distance (near, far). Furthermore, independent behavioral ratings of expanse and distance correlated strongly with representations in PPA and peripheral EVC, respectively. In neither region was content (manmade, natural) a major contributor to the overall organization. Furthermore, the response of PPA could not be used to decode the high-level semantic category of scenes even when spatial factors were held constant, nor could category be decoded across different distances. These findings demonstrate, contrary to recent reports, that the response of PPA primarily reflects spatial, not categorical or contextual, aspects of real-world scenes.


Cerebral Cortex | 2010

High-Level Visual Object Representations Are Constrained by Position

Dwight J. Kravitz; Nikolaus Kriegeskorte; Chris I. Baker

It is widely assumed that high-level visual object representations are position-independent (or invariant). While there is sensitivity to position in high-level object-selective cortex, position and object identity are thought to be encoded independently in the population response, such that position information is available across objects and object information is available across positions. Contrary to this view, we show, with both behavior and neuroimaging, that visual object representations are position-dependent (tied to limited portions of the visual field). Behaviorally, we show that the effect of priming an object was greatly reduced with any change in position (within- or between-hemifields), indicating nonoverlapping representations of the same object across different positions. Furthermore, using neuroimaging, we show that object-selective cortex is not only highly sensitive to object position, but also that the ability to differentiate objects based on its response is greatly reduced across different positions, consistent with the observed behavior and the receptive field properties observed in macaque object-selective neurons. Thus, even at the population level, the object information available in the response of object-selective cortex is constrained by position. We conclude that even high-level visual object representations are position-dependent.


Trends in Cognitive Sciences | 2008

How position dependent is visual object recognition?

Dwight J. Kravitz; Latrice Vinson; Chris I. Baker

Visual object recognition is often assumed to be insensitive to changes in retinal position, leading to theories and formal models incorporating position-independent object representations. However, recent behavioral and physiological evidence has questioned the extent to which object recognition is position independent. Here, we take a computational and physiological perspective to review the current behavioral literature. Although numerous studies report reduced object recognition performance with translation, even for distances as small as 0.5 degrees of visual angle, confounds in many of these studies make the results difficult to interpret. We conclude that there is little evidence to support position-independent object recognition and the precise role of position in object recognition remains unknown.


Nature Neuroscience | 2013

Goal-dependent dissociation of visual and prefrontal cortices during working memory.

Sue-Hyun Lee; Dwight J. Kravitz; Chris I. Baker

To determine the specific contribution of brain regions to working memory, human participants performed two distinct tasks on the same visually presented objects. During the maintenance of visual properties, object identity could be decoded from extrastriate, but not prefrontal, cortex, whereas the opposite held for nonvisual properties. Thus, the ability to maintain information during working memory is a general and flexible cortical property, with the role of individual regions being goal-dependent.


Cerebral Cortex | 2013

Deconstructing Visual Scenes in Cortex: Gradients of Object and Spatial Layout Information

Assaf Harel; Dwight J. Kravitz; Chris I. Baker

Real-world visual scenes are complex, cluttered, and heterogeneous stimuli, engaging scene- and object-selective cortical regions including the parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathways, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity.


Proceedings of the National Academy of Sciences of the United States of America | 2014

Task context impacts visual object processing differentially across the cortex

Assaf Harel; Dwight J. Kravitz; Chris I. Baker

Significance: Visual recognition is often thought to depend on neural representations that primarily reflect the physical properties of the environment. However, in this study we demonstrate that the intent of the observer fundamentally perturbs cortical representations of visual objects. Using functional MRI we measured the patterns of response to identical objects under six different tasks. In any given task, these patterns could be used to distinguish which object was being viewed. However, this ability was disrupted when the task changed, indicating that object representations reflect not only the physical properties of the stimulus, but also the internal state of the observer.

Perception reflects an integration of “bottom-up” (sensory-driven) and “top-down” (internally generated) signals. Although models of visual processing often emphasize the central role of feed-forward hierarchical processing, less is known about the impact of top-down signals on complex visual representations. Here, we investigated whether and how the observer’s goals modulate object processing across the cortex. We examined responses elicited by a diverse set of objects under six distinct tasks, focusing on either physical (e.g., color) or conceptual properties (e.g., man-made). Critically, the same stimuli were presented in all tasks, allowing us to investigate how task impacts the neural representations of identical visual input. We found that task has an extensive and differential impact on object processing across the cortex. First, we found task-dependent representations in the ventral temporal and prefrontal cortex. In particular, although object identity could be decoded from the multivoxel response within task, there was a significant reduction in decoding across tasks. In contrast, the early visual cortex evidenced equivalent decoding within and across tasks, indicating task-independent representations.
Second, task information was pervasive and present from the earliest stages of object processing. However, although the responses of the ventral temporal, prefrontal, and parietal cortex enabled decoding of both the type of task (physical/conceptual) and the specific task (e.g., color), the early visual cortex was not sensitive to type of task and could only be used to decode individual physical tasks. Thus, object processing is highly influenced by the behavioral goal of the observer, highlighting how top-down signals constrain and inform the formation of visual representations.


Nature Neuroscience | 2010

Cortical representations of bodies and faces are strongest in commonly experienced configurations

Annie W. Chan; Dwight J. Kravitz; Sandra Truong; Joseph Arizpe; Chris I. Baker

Faces and bodies are perhaps the most salient and evolutionarily important visual stimuli. Using human functional imaging, we found that the strength of face and body representations depends on long-term experience. Representations were strongest for stimuli in their typical combinations of visual field and side (for example, left field, right body), although all conditions were simply reflections and translations of one another. Thus, high-level representations reflect the statistics with which stimuli occur.


The Journal of Neuroscience | 2013

Slower rate of binocular rivalry in autism.

Caroline E. Robertson; Dwight J. Kravitz; Jan Freyberg; Simon Baron-Cohen; Chris I. Baker

An imbalance between cortical excitation and inhibition is a central component of many models of autistic neurobiology. We tested a potential behavioral footprint of this proposed imbalance using binocular rivalry, a visual phenomenon in which perceptual experience is thought to mirror the push and pull of excitatory and inhibitory cortical dynamics. In binocular rivalry, two monocularly presented images compete, leading to a percept that alternates between them. In a series of trials, we presented separate images of objects (e.g., a baseball and a broccoli) to each eye using a mirror stereoscope and asked human participants with autism and matched control subjects to continuously report which object they perceived, or whether they perceived a mixed percept. Individuals with autism demonstrated a slower rate of binocular rivalry alternations than matched control subjects, with longer durations of mixed percepts and an increased likelihood to revert to the previously perceived object when exiting a mixed percept. Critically, each of these findings was highly predictive of clinical measures of autistic symptomatology. Control “playback” experiments demonstrated that differences in neither response latencies nor response criteria could account for the atypical dynamics of binocular rivalry we observed in autistic spectrum conditions. Overall, these results may provide an index of atypical cortical dynamics that may underlie both the social and nonsocial symptoms of autism.


PLOS ONE | 2012

Start Position Strongly Influences Fixation Patterns during Face Processing: Difficulties with Eye Movements as a Measure of Information Use

Joseph Arizpe; Dwight J. Kravitz; Galit Yovel; Chris I. Baker

Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. 
These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute fixation locations as indicative of information use, particularly at a fine spatial scale.


The Journal of Neuroscience | 2013

Tunnel vision: sharper gradient of spatial attention in autism.

Caroline E. Robertson; Dwight J. Kravitz; Jan Freyberg; Simon Baron-Cohen; Chris I. Baker

Enhanced perception of detail has long been regarded a hallmark of autism spectrum conditions (ASC), but its origins are unknown. Normal sensitivity on all fundamental perceptual measures—visual acuity, contrast discrimination, and flicker detection—is strongly established in the literature. If individuals with ASC do not have superior low-level vision, how is perception of detail enhanced? We argue that this apparent paradox can be resolved by considering visual attention, which is known to enhance basic visual sensitivity, resulting in greater acuity and lower contrast thresholds. Here, we demonstrate that the focus of attention and concomitant enhancement of perception are sharper in human individuals with ASC than in matched controls. Using a simple visual acuity task embedded in a standard cueing paradigm, we mapped the spatial and temporal gradients of attentional enhancement by varying the distance and onset time of visual targets relative to an exogenous cue, which obligatorily captures attention. Individuals with ASC demonstrated a greater fall-off in performance with distance from the cue than controls, indicating a sharper spatial gradient of attention. Further, this sharpness was highly correlated with the severity of autistic symptoms in ASC, as well as autistic traits across both ASC and control groups. These findings establish the presence of a form of “tunnel vision” in ASC, with far-reaching implications for our understanding of the social and neurobiological aspects of autism.

Collaboration


Dive into Dwight J. Kravitz's collaborations.

Top Co-Authors:

- Chris I. Baker (National Institutes of Health)
- Assaf Harel (Wright State University)
- Joseph Arizpe (National Institutes of Health)
- Marlene Behrmann (Carnegie Mellon University)
- Sue-Hyun Lee (National Institutes of Health)
- Alex Martin (National Institutes of Health)
- Cibu Thomas (National Institutes of Health)
- Cynthia S. Peng (National Institutes of Health)