Publications


Featured research published by Dwight Kravitz.


The Journal of Neuroscience | 2015

A Retinotopic Basis for the Division of High-Level Scene Processing between Lateral and Ventral Human Occipitotemporal Cortex

Edward Silson; Annie Wai Yiu Chan; Richard Reynolds; Dwight Kravitz; Chris I. Baker

In humans, there is a repeated category-selective organization across the lateral and ventral surfaces of the occipitotemporal cortex. This apparent redundancy is often explained as a feedforward hierarchy, with processing within lateral areas preceding the processing within ventral areas. Here, we tested the alternative hypothesis that this structure better reflects distinct high-level representations of the upper (ventral surface) and lower (lateral surface) contralateral quadrants of the visual field, consistent with anatomical projections from early visual areas to these surfaces in monkey. Using complex natural scenes, we provide converging evidence from three independent functional imaging and behavioral studies. First, population receptive field mapping revealed strong biases for the contralateral upper and lower quadrant within the ventral and lateral scene-selective regions, respectively. Second, these same biases were observed in the position information available in both the magnitude and multivoxel response across these areas. Third, behavioral judgments of a scene property strongly represented within the ventral scene-selective area (open/closed), but not another equally salient property (manmade/natural), were more accurate in the upper than the lower field. Such differential representation of visual space poses a substantial challenge to the idea of a strictly hierarchical organization between lateral and ventral scene-selective regions. Moreover, such retinotopic biases seem to extend beyond these regions throughout both surfaces. Thus, the large-scale organization of high-level extrastriate cortex likely reflects the need for both specialized representations of particular categories and constraints from the structure of early vision.

SIGNIFICANCE STATEMENT One of the most striking findings in fMRI has been the presence of matched category-selective regions on the lateral and ventral surfaces of human occipitotemporal cortex.
Here, we focus on scene-selective regions and provide converging evidence for a retinotopic explanation of this organization. Specifically, we demonstrate that scene-selective regions exhibit strong biases for different portions of the visual field, with the lateral region representing the contralateral lower visual field and the ventral region the contralateral upper visual field. These biases are consistent with the retinotopy found in the early visual areas that lie directly antecedent to category-selective areas on both surfaces. Furthermore, these biases extend beyond scene-selective cortex and provide a retinotopic basis for the large-scale organization of occipitotemporal cortex.


Journal of Vision | 2016

Evaluating the correspondence between face-, scene-, and object-selectivity and retinotopic organization within lateral occipitotemporal cortex.

Edward Silson; Iris I. A. Groen; Dwight Kravitz; Chris I. Baker

The organization of human lateral occipitotemporal cortex (lOTC) has been characterized largely according to two distinct principles: retinotopy and category-selectivity. Whereas category-selective regions were originally thought to exist beyond retinotopic maps, recent evidence highlights overlap. Here, we combined detailed mapping of retinotopy, using population receptive fields (pRF), and category-selectivity to examine and contrast the retinotopic profiles of scene- (occipital place area, OPA), face- (occipital face area, OFA) and object- (lateral occipital cortex, LO) selective regions of lOTC. We observe striking differences in the relationship each region has to the underlying retinotopy. Whereas OPA overlapped multiple retinotopic maps (including V3A, V3B, LO1, and LO2), and LO overlapped two maps (LO1 and LO2), OFA overlapped almost none. There appears to be no simple, consistent relationship between category-selectivity and retinotopic maps, meaning that category-selective regions are not consistently constrained by the borders of retinotopic maps. The multiple maps that overlap OPA suggest it is likely not appropriate to conceptualize it as a single scene-selective region, whereas the lack of consistent overlap between OFA and any systematic map suggests it may constitute a more uniform area. Beyond their relationship to retinotopy, all three regions evidenced strongly retinotopic voxels, with pRFs exhibiting a significant bias toward the contralateral lower visual field despite differences in pRF size, contributing to an emerging literature suggesting this bias is present across much of lOTC. Taken together, these results suggest that whereas category-selective regions are not constrained to contain ordered retinotopic maps, they nonetheless likely inherit the retinotopic characteristics of the maps from which they draw information.


Current Biology | 2016

Neural Representations Integrate the Current Field of View with the Remembered 360° Panorama in Scene-Selective Cortex

Caroline E. Robertson; Katherine Hermann; Anna Mynick; Dwight Kravitz; Nancy Kanwisher

We experience our visual environment as a seamless, immersive panorama. Yet, each view is discrete and fleeting, separated by expansive eye movements and discontinuous views of our spatial surroundings. How are discrete views of a panoramic environment knit together into a broad, unified memory representation? Regions of the brain's scene network are well poised to integrate retinal input and memory [1]: they are visually driven [2, 3] but also densely interconnected with memory structures in the medial temporal lobe [4]. Further, these regions harbor memory signals relevant for navigation [5-8] and adapt across overlapping shifts in scene viewpoint [9, 10]. However, it is unknown whether regions of the scene network support visual memory for the panoramic environment outside of the current field of view and, further, how memory for the surrounding environment influences ongoing perception. Here, we demonstrate that specific regions of the scene network, the retrosplenial complex (RSC) and occipital place area (OPA), unite discrete views of a 360° panoramic environment, both current and out of sight, in a common representational space. Further, individual scene views prime associated representations of the panoramic environment in behavior, facilitating subsequent perceptual judgments. We propose that this dynamic interplay between memory and perception plays an important role in weaving the fabric of continuous visual experience.


PLOS ONE | 2016

Differences in Looking at Own- and Other-Race Faces Are Subtle and Analysis-Dependent: An Account of Discrepant Reports

Joseph Arizpe; Dwight Kravitz; Vincent Walsh; Galit Yovel; Chris I. Baker

The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using different analysis methods. While we detect statistically significant, though subtle, differences in fixation pattern using an Area of Interest (AOI) approach, we fail to detect significant differences when applying a spatial density map approach. Though there were no significant differences in the spatial density maps, the qualitative patterns matched the results from the AOI analyses, reflecting how, in certain contexts, AOI analyses can be more sensitive in detecting differential fixation patterns than spatial density analyses, due to the spatial pooling of data within AOIs. AOI analyses, however, also come with the limitation of requiring a priori specification. These findings provide evidence that the conflicting reports in the prior literature may be at least partially accounted for by differences in the statistical sensitivity associated with the different analysis methods employed across studies. Overall, our results suggest that detection of differences in eye-movement patterns can be analysis-dependent and rests on the assumptions inherent in the given analysis.
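The contrast between the two approaches can be made concrete with a small sketch: the AOI approach pools all fixations inside one predefined region into a single proportion, while the density-map approach bins fixations over many locations and tests each separately. This is a hedged toy illustration only; the simulated fixations, the "eyes" rectangle, and the image dimensions are invented for demonstration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated fixation coordinates (x, y in pixels) on a 512x512 face image.
fixations = rng.normal(loc=(256, 230), scale=60, size=(200, 2))

# AOI approach: pool fixations falling inside a predefined rectangle,
# here a hypothetical "eyes" region given as (x0, y0, x1, y1).
eyes_aoi = (176, 180, 336, 240)

def aoi_proportion(fix, aoi):
    """Proportion of fixations landing inside a rectangular AOI."""
    x0, y0, x1, y1 = aoi
    inside = ((fix[:, 0] >= x0) & (fix[:, 0] <= x1)
              & (fix[:, 1] >= y0) & (fix[:, 1] <= y1))
    return inside.mean()

# Density-map approach: bin fixations into a 2D histogram (often
# Gaussian-smoothed in practice), then compare location by location;
# no spatial pooling, so each bin carries far fewer data points.
density, _, _ = np.histogram2d(fixations[:, 0], fixations[:, 1],
                               bins=32, range=[[0, 512], [0, 512]])

print(f"eyes AOI fixation proportion: {aoi_proportion(fixations, eyes_aoi):.2f}")
```

The pooling is the key difference: the single AOI statistic aggregates all fixations in the region, which can expose subtle condition differences that per-bin density tests miss, at the cost of needing the region specified in advance.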


Frontiers in Human Neuroscience | 2014

Holding a stick at both ends: on faces and expertise.

Assaf Harel; Dwight Kravitz; Chris I. Baker

Ever since Diamond and Carey's (1986) seminal work, object expertise has often been viewed through the prism of face perception (for a thorough discussion, see Tanaka and Gauthier, 1997; Sheinberg and Tarr, 2010). According to Wong and Wong (2014, W&W), however, this emphasis has simply been a response to the question of modularity of face perception, and has not been about expertise in and of itself. It is precisely this conflation of questions of expertise and modularity, the consequent focus on FFA, and the detrimental effect this had on the field of object expertise research that we discussed as part of our original review (Harel et al., 2013).

We fully acknowledge that some recent works on visual expertise, particularly outside the domain of real-world object recognition (the focus of our article), have started to discuss object expertise beyond sensory cortex (e.g., Wong and Gauthier, 2010; Wong et al., 2012). However, at the same time, other high-profile works continue to focus on expertise solely in the context of FFA and face-selectivity (McGugin et al., 2012, 2014), arguing that their results are inconsistent with the notion that "learning effects are distributed throughout cortex with no relation to face selectivity" (McGugin et al., 2012, p. 17067).

Focusing on discrete regions when the question is modularity, but focusing on distributed effects when the question is expertise itself, comes across as holding the stick at both ends and leads to the widespread misconception that FFA plays a privileged role in expertise. Of course, one can show that expertise effects occur within FFA while simultaneously acknowledging the widespread effects of expertise across the cortex. However, the significance of the former result to the understanding of object expertise is greatly reduced by the latter. Put simply, the more distributed expertise effects are, the less significant is the role of any one particular region for our understanding of the general mechanisms of object expertise. Take, for example, the widespread effects of car expertise, which include even early visual cortex (Harel et al., 2013, Figure 2). Thus, a continued focus on the relationship between expertise and face processing detracts from the study of the general principles underlying real-world object expertise.

Beyond the issue of modularity, W&W point to (i) prior work on perceptual expertise (Gauthier and Tarr, 2002; McCandliss et al., 2003; McGugin et al., 2011; Richler et al., 2011), and (ii) recent work explicitly testing the hypothesis that car expertise effects are invariant to modulations of attention or clutter (McGugin et al., 2014). In their response W&W suggest that experts "tend to automatically process their objects of expertise in a certain way" but those processes can be "overridden by higher-level cognitive processing." It is unclear how an automatic process can be sometimes engaged and occasionally overridden. This comes across as another instance of holding the stick at both ends.

Despite the points of contention we have highlighted here, we are encouraged that W&W fully agree with the distributed interactive view of visual expertise we discussed (Harel et al., 2010, 2013). We are certain that future research fully focused on addressing the distributed and highly interactive nature of visual expertise will provide new insights into the cortical mechanisms underlying real-world object expertise.


Perception | 2017

Visual Search: You Are Who You Are (+ A Learning Curve)

Justin M. Ericson; Dwight Kravitz; Stephen R. Mitroff

Not everyone is equally well suited for every endeavor—individuals differ in their strengths and weaknesses, which makes some people better at performing some tasks than others. As such, it might be possible to predict individuals’ peak competence (i.e., ultimate level of success) on a given task based on their early performance in that task. The current study leveraged “big data” from the mobile game, Airport Scanner (Kedlin Company), to assess the possibility of predicting individuals’ ultimate visual search competency using the minimum possible unit of data: response time on a single visual search trial. Those who started out poorly were likely to stay relatively poor and those who started out strong were likely to remain top performers. This effect was apparent at the level of a single trial (in fact, the first trial), making it possible to use raw response time to predict later levels of success.
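The core analysis logic, that a single early measurement can predict eventual performance, can be sketched with simulated data. This is a hedged toy model: the variable names, distributions, and parameter values are all invented for illustration and have nothing to do with the actual Airport Scanner dataset. It assumes only that a stable per-player skill factor drives both the first response time and the ultimate performance level.

```python
import numpy as np

rng = np.random.default_rng(1)
n_players = 1000

# Hypothetical stable per-player skill factor driving both the very
# first response time (faster = more skilled) and ultimate peak accuracy.
skill = rng.normal(size=n_players)
first_trial_rt = 1200 - 150 * skill + rng.normal(scale=200, size=n_players)
peak_accuracy = 0.80 + 0.05 * skill + rng.normal(scale=0.05, size=n_players)

# Does the single first-trial response time predict who ends up a top
# performer? A negative correlation means a faster start, better finish.
r = np.corrcoef(first_trial_rt, peak_accuracy)[0, 1]
print(f"first-trial RT vs. peak accuracy: r = {r:.2f}")
```

Even with heavy trial-level noise, the shared skill factor produces a reliable single-trial correlation, which is the pattern the study reports at scale.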


bioRxiv | 2018

Similarity judgments and cortical visual responses reflect different properties of object and scene categories in naturalistic images

Marcie L. King; Iris I. A. Groen; Adam Steel; Dwight Kravitz; Chris I. Baker

Numerous factors have been reported to underlie the representation of complex images in high-level human visual cortex, including categories (e.g. faces, objects, scenes), animacy, and real-world size, but the extent to which this organization is reflected in behavioral judgments of real-world stimuli is unclear. Here, we compared representations derived from explicit similarity judgments and ultra-high field (7T) fMRI of human visual cortex for multiple exemplars of a diverse set of naturalistic images from 48 object and scene categories. Behavioral judgments revealed a coarse division between man-made (including humans) and natural (including animals) images, with clear groupings of conceptually related categories (e.g. transportation, animals), while these conceptual groupings were largely absent in the fMRI representations. Instead, fMRI responses tended to reflect a separation of both human and non-human faces/bodies from all other categories. This pattern yielded a statistically significant, but surprisingly limited correlation between the two representational spaces. Further, comparison of the behavioral and fMRI representational spaces with those derived from the layers of a deep neural network (DNN) showed a strong correspondence with behavior in the top-most layer and with fMRI in the mid-level layers. These results suggest that there is no simple mapping between responses in high-level visual cortex and behavior: each domain reflects different visual properties of the images, and responses in high-level visual cortex may correspond to intermediate stages of processing between basic visual features and the conceptual categories that dominate the behavioral response.

Significance Statement: It is commonly assumed there is a correspondence between behavioral judgments of complex visual stimuli and the response of high-level visual cortex. We directly compared these representations across a diverse set of naturalistic object and scene categories and found a strikingly different representational structure. Further, both types of representation showed good correspondence with a deep neural network, but each correlated most strongly with different layers. These results show that behavioral judgments reflect more conceptual properties while visual cortical fMRI responses capture more general visual features. Collectively, our findings highlight that great care must be taken in mapping the response of visual cortex onto behavior, as the two clearly reflect different information.
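The comparison of behavioral, fMRI, and DNN representational spaces described above follows the general logic of representational similarity analysis (RSA): build a dissimilarity matrix over the 48 categories in each domain, then correlate the matrices across domains. The following is a minimal sketch with random placeholder data; the variable names, matrix sizes, and use of Pearson correlation are illustrative assumptions, not the authors' pipeline (RSA studies often use Spearman correlation between RDMs).

```python
import numpy as np

rng = np.random.default_rng(0)
n_categories = 48

# Placeholder feature matrices (rows = categories, columns = measurement
# channels, e.g. a similarity-judgment embedding, voxel responses, or
# DNN-layer activations). Real data would replace these.
behavior = rng.normal(size=(n_categories, 20))
fmri = rng.normal(size=(n_categories, 500))

def rdm(features):
    """Representational dissimilarity matrix:
    1 - Pearson correlation between every pair of category rows."""
    return 1.0 - np.corrcoef(features)

def compare_rdms(a, b):
    """Correlate the upper triangles (unique pairs) of two RDMs."""
    iu = np.triu_indices(a.shape[0], k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

r = compare_rdms(rdm(behavior), rdm(fmri))
print(f"behavior-vs-fMRI RDM correlation: r = {r:.3f}")
```

Correlating each domain's RDM against RDMs computed from successive DNN layers is what allows the kind of layer-wise correspondence reported above: behavior matching the top-most layer, fMRI matching mid-level layers.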


The Journal of Neuroscience | 2018

Differential sampling of visual space in ventral and dorsal early visual cortex

Edward Silson; Richard Reynolds; Dwight Kravitz; Chris I. Baker

A fundamental feature of cortical visual processing is the separation of visual processing for the upper and lower visual fields. In early visual cortex (EVC), the upper visual field is processed ventrally, with the lower visual field processed dorsally. This distinction persists into several category-selective regions of occipitotemporal cortex, with ventral and lateral scene-, face-, and object-selective regions biased for the upper and lower visual fields, respectively. Here, using an elliptical population receptive field (pRF) model, we systematically tested the sampling of visual space within ventral and dorsal divisions of human EVC in both male and female participants. We found that (1) pRFs tend to be elliptical and oriented toward the fovea with distinct angular distributions for ventral and dorsal divisions of EVC, potentially reflecting a radial bias; and (2) pRFs in ventral areas were larger (∼1.5×) and more elliptical (∼1.2×) than those in dorsal areas. These differences potentially reflect a tendency for receptive fields in ventral temporal cortex to overlap the fovea with less emphasis on precise localization and isotropic representation of space compared with dorsal areas. Collectively, these findings suggest that ventral and dorsal divisions of EVC sample visual space differently, likely contributing to and/or stemming from the functional differentiation of visual processing observed in higher-level regions of the ventral and dorsal cortical visual pathways.

SIGNIFICANCE STATEMENT The processing of visual information from the upper and lower visual fields is separated in visual cortex. Although ventral and dorsal divisions of early visual cortex (EVC) are commonly assumed to sample visual space equivalently, we demonstrate systematic differences using an elliptical population receptive field (pRF) model.
Specifically, we demonstrate that (1) ventral and dorsal divisions of EVC exhibit diverging distributions of pRF angle, which are biased toward the fovea; and (2) ventral pRFs exhibit higher aspect ratios and cover larger areas than dorsal pRFs. These results suggest that ventral and dorsal divisions of EVC sample visual space differently and that such differential sampling likely contributes to different functional roles attributed to the ventral and dorsal pathways, such as object recognition and visually guided attention, respectively.
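An elliptical pRF of the kind described here can be written as a rotated 2D Gaussian with separate major- and minor-axis widths, where the aspect ratio is the major width divided by the minor width. The sketch below is illustrative only: the grid, the parameter values, and the rule of pointing the major axis at the fovea are placeholder assumptions for demonstration, not fitted values from the study.

```python
import numpy as np

def elliptical_prf(x, y, x0, y0, sigma_major, sigma_minor, theta):
    """2D elliptical Gaussian pRF centered at (x0, y0) with major/minor
    axis widths and orientation theta (radians, counterclockwise)."""
    # Rotate coordinates into the ellipse's own frame.
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return np.exp(-0.5 * ((xr / sigma_major) ** 2 + (yr / sigma_minor) ** 2))

# Illustrative ventral-like pRF: upper-field center with the major axis
# oriented toward the fovea (the origin), aspect ratio ~1.2x.
x, y = np.meshgrid(np.linspace(-10, 10, 201), np.linspace(-10, 10, 201))
x0, y0 = 3.0, 4.0
theta = np.arctan2(y0, x0)  # angle of the center relative to fixation
prf = elliptical_prf(x, y, x0, y0, sigma_major=2.4, sigma_minor=2.0,
                     theta=theta)

aspect_ratio = 2.4 / 2.0
print(f"aspect ratio: {aspect_ratio:.2f}")
```

Setting `sigma_major == sigma_minor` recovers the standard circular pRF model, so the aspect ratio and angle are exactly the extra degrees of freedom the elliptical model adds.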


Journal of Vision | 2017

Lingering effects of response inhibition: Evidence for both control settings and memory association mechanisms

Rachel Wynn; Dwight Kravitz; Stephen R. Mitroff

(Figure summary) All lags individually examined; response time and hit rate recorded. Trial types: Big Bomb (never inhibited; 3.6% of overall trials, 92% accuracy; examined after Pistol Air Marshal and Pistol Search anchors) and Pistol (inhibited during Air Marshal trials; 3.7% of overall trials, 92% accuracy; examined after Pistol Air Marshal and Big Bomb Search anchors). Big Bomb Search served as the baseline for Pistol; Pistol Search as the baseline for Big Bomb; Air Marshal as the inhibition condition.


eNeuro | 2016

The Temporal Dynamics of Scene Processing: A Multifaceted EEG Investigation

Assaf Harel; Iris I. A. Groen; Dwight Kravitz; Leon Y. Deouell; Chris I. Baker

Collaboration


Top co-authors of Dwight Kravitz:

Chris I. Baker (United States Department of Health and Human Services)
Assaf Harel (Wright State University)
Stephen R. Mitroff (George Washington University)
Edward Silson (National Institutes of Health)
Iris I. A. Groen (National Institutes of Health)
Katherine Hermann (McGovern Institute for Brain Research)
Nancy Kanwisher (Massachusetts Institute of Technology)