Joshua B. Julian
University of Pennsylvania
Publications
Featured research published by Joshua B. Julian.
The Journal of Neuroscience | 2013
Daniel D. Dilks; Joshua B. Julian; Alexander M. Paunov; Nancy Kanwisher
Functional magnetic resonance imaging has revealed a set of regions selectively engaged in visual scene processing: the parahippocampal place area (PPA), the retrosplenial complex (RSC), and a region around the transverse occipital sulcus (previously known as “TOS”), here renamed the “occipital place area” (OPA). Are these regions not only preferentially activated by, but also causally involved in scene perception? Although past neuropsychological data imply a causal role in scene processing for PPA and RSC, no such evidence exists for OPA. Thus, to test the causal role of OPA in human adults, we delivered transcranial magnetic stimulation (TMS) to the right OPA (rOPA) or the nearby face-selective right occipital face area (rOFA) while participants performed fine-grained perceptual discrimination tasks on scenes or faces. TMS over rOPA impaired discrimination of scenes but not faces, while TMS over rOFA impaired discrimination of faces but not scenes. In a second experiment, we delivered TMS to rOPA, or the object-selective right lateral occipital complex (rLOC), while participants performed categorization tasks involving scenes and objects. TMS over rOPA impaired categorization accuracy of scenes but not objects, while TMS over rLOC impaired categorization accuracy of objects but not scenes. These findings provide the first evidence that OPA is causally involved in scene processing, and further show that this causal role is selective for scene perception. Our findings illuminate the functional architecture of the scene perception system, and also argue against the “distributed coding” view in which each category-selective region participates in the representation of all objects.
NeuroImage | 2012
Joshua B. Julian; Evelina Fedorenko; Jason Webster; Nancy Kanwisher
In a widely used functional magnetic resonance imaging (fMRI) data analysis method, functional regions of interest (fROIs) are handpicked in each participant using macroanatomic landmarks as guides, and the response of these regions to new conditions is then measured. A key limitation of this standard handpicked fROI method is the subjectivity of decisions about which clusters of activated voxels should be treated as the particular fROI in question in each subject. Here we apply the Group-Constrained Subject-Specific (GSS) method for defining fROIs, recently developed for identifying language fROIs (Fedorenko et al., 2010), to algorithmically identify fourteen well-studied category-selective regions of the ventral visual pathway (Kanwisher, 2010). We show that this method retains the benefit of defining fROIs in individual subjects without the subjectivity inherent in the traditional handpicked fROI approach. The tools necessary for using this method are available on our website (http://web.mit.edu/bcs/nklab/GSS.shtml).
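The GSS method described above can be illustrated in a few lines: a group-level parcel constrains where to look, and each subject's own localizer contrast selects the most responsive voxels inside that parcel. The sketch below is illustrative only, not the toolkit distributed at the linked website; the function name, the flattened-array layout, and the top-10% selection rule are assumptions based on the abstract's description.

```python
import numpy as np

def define_froi(subject_contrast, group_parcel_mask, top_fraction=0.1):
    """Select a subject-specific fROI within a group-constrained parcel.

    subject_contrast:  1-D array of localizer contrast values, one per voxel
    group_parcel_mask: boolean array of the same shape, True inside the
                       group-level parcel
    Returns a boolean mask marking the top `top_fraction` of the parcel's
    voxels, ranked by the subject's own contrast values.
    """
    # Exclude voxels outside the parcel from the ranking
    values = np.where(group_parcel_mask, subject_contrast, -np.inf)
    # Number of voxels to keep (at least one)
    n_select = max(1, int(group_parcel_mask.sum() * top_fraction))
    # Indices of the n_select highest-responding voxels inside the parcel
    top_idx = np.argsort(values)[-n_select:]
    froi = np.zeros_like(group_parcel_mask)
    froi[top_idx] = True
    return froi
```

The key design point is that the parcel is fixed across subjects (removing the hand-picking step the abstract criticizes), while the voxels inside it are chosen per subject, preserving individual-subject specificity.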
The Journal of Neuroscience | 2011
Daniel D. Dilks; Joshua B. Julian; Jonas Kubilius; Elizabeth S. Spelke; Nancy Kanwisher
Electrophysiological and behavioral studies in many species have demonstrated mirror-image confusion for objects, perhaps because many objects are vertically symmetric (e.g., a cup is the same cup when seen in left or right profile). In contrast, the navigability of a scene changes when it is mirror reversed, and behavioral studies reveal high sensitivity to this change. Thus, we predicted that representations in object-selective cortex will be unaffected by mirror reversals, whereas representations in scene-selective cortex will be sensitive to such reversals. To test this hypothesis, we ran an event-related functional magnetic resonance imaging adaptation experiment in human adults. Consistent with our prediction, we found tolerance to mirror reversals in one object-selective region, the posterior fusiform sulcus, and a strong sensitivity to these reversals in two scene-selective regions, the transverse occipital sulcus and the retrosplenial complex. However, a more posterior object-selective region, the lateral occipital sulcus, showed sensitivity to mirror reversals, suggesting that the sense information that distinguishes mirror images is represented at earlier stages in the object-processing hierarchy. Moreover, one scene-selective region (the parahippocampal place area or PPA) was tolerant to mirror reversals. This last finding challenges the hypothesis that the PPA is involved in navigation and reorientation and suggests instead that scenes, like objects, are processed by distinct pathways guiding recognition and action.
Proceedings of the National Academy of Sciences of the United States of America | 2014
Kami Koldewyn; Anastasia Yendiki; Sarah Weigelt; Hyowon Gweon; Joshua B. Julian; Hilary Richardson; Caitlin Malloy; Rebecca Saxe; Bruce Fischl; Nancy Kanwisher
Significance: One of the most accepted brain “signatures” of autism spectrum disorder (ASD) is a reduction in the integrity of long-range white-matter fiber tracts. Here, we assessed known white-matter tracts in children with ASD by using diffusion-weighted imaging. In contrast to most prior studies, we carefully matched for head motion between groups. When data quality was matched, there was no evidence of widespread changes in white-matter tracts in the ASD group. Instead, differences were present in only one tract, the right inferior longitudinal fasciculus. These data challenge the idea that widespread changes in white-matter integrity are a signature of ASD and highlight the importance of matching for data quality in future diffusion studies of ASD and other clinical disorders.

One of the most widely cited features of the neural phenotype of autism is reduced “integrity” of long-range white-matter tracts, a claim based primarily on diffusion imaging studies. However, many prior studies have small sample sizes and/or fail to address differences in data quality between participants with autism spectrum disorder (ASD) and typical participants, and there is little consensus on which tracts are affected. To overcome these problems, we scanned a large sample of children with autism (n = 52) and typically developing children (n = 73). Data quality was variable, and worse in the ASD group, with some scans unusable because of head-motion artifacts. When we followed standard data-analysis practices (i.e., without matching head motion between groups), we replicated the finding of lower fractional anisotropy (FA) in multiple white-matter tracts. However, when we carefully matched data quality between groups, all these effects disappeared except in one tract, the right inferior longitudinal fasciculus (ILF). Additional analyses showed the expected developmental increases in the FA of fiber tracts within the ASD and typical groups individually, demonstrating that we had sufficient statistical power to detect known group differences. Our data challenge the widely claimed general disruption of white-matter tracts in autism, instead implicating only one tract, the right ILF, in the ASD phenotype.
Optometry and Vision Science | 2014
Daniel D. Dilks; Joshua B. Julian; Eli Peli; Nancy Kanwisher
Purpose: When individuals with central vision loss due to macular degeneration (MD) view stimuli in the periphery, most of them activate the region of retinotopic cortex normally activated only by foveal stimuli—a process often referred to as reorganization. Why do some show this reorganization of visual processing whereas others do not? We reported previously that six individuals with complete bilateral loss of central vision showed such reorganization, whereas two with bilateral central vision loss but with foveal sparing did not, and we hypothesized that the effect occurs only after complete bilateral loss of foveal vision. Here, we conduct a stronger test of the dependence of reorganization of visual processing in MD on complete loss of foveal function by bringing back one (called MD6) of the two participants who previously showed foveal sparing and did not show reorganization. MD6 has now lost all foveal function, and we predicted that if large-scale reorganization of visual processing in MD individuals depends on complete loss of foveal input, then we would now see such reorganization in this individual.

Methods: MD6 and two normally sighted control subjects were scanned. Stimuli were gray-scale photographs of objects presented at either the fovea or a peripheral retinal location (i.e., the MD participant’s preferred retinal locus or the control participants’ matched peripheral location).

Results: In MD6, visual stimulation at the preferred retinal locus significantly activated not only the expected “peripheral” retinotopic cortex but also the deprived “foveal” cortex. Crucially, MD6 exhibited no such large-scale reorganization 5 years earlier, when she had some foveal sparing. By contrast, in the control participants, stimulation at the matched peripheral location produced significant activation in peripheral retinotopic cortex only.

Conclusions: We conclude that complete loss of foveal function may be a necessary condition for large-scale reorganization of visual processing in individuals with MD.
Proceedings of the National Academy of Sciences of the United States of America | 2015
Joshua B. Julian; Alexander T. Keinath; Isabel A. Muzzio; Russell A. Epstein
Significance: The ability to recover one’s bearings when lost is critical for successful navigation. To accomplish this feat, a navigator must identify its current location (place recognition), and it must also recover its facing direction (heading retrieval). Using a novel behavioral paradigm, we demonstrate that mice use one set of cues to determine their location and then ignore these same cues when determining their heading, although the cues are informative in both cases. These results suggest that place recognition and heading retrieval are mediated by different processing systems that operate in partial independence of each other. This finding has important implications for understanding the cognitive architecture underlying spatial navigation.

A lost navigator must identify its current location and recover its facing direction to restore its bearings. We tested the idea that these two tasks—place recognition and heading retrieval—might be mediated by distinct cognitive systems in mice. Previous work has shown that numerous species, including young children and rodents, use the geometric shape of local space to regain their sense of direction after disorientation, often ignoring nongeometric cues even when they are informative. Notably, these experiments have almost always been performed in single-chamber environments in which there is no ambiguity about place identity. We examined the navigational behavior of mice in a two-chamber paradigm in which animals had to both recognize the chamber in which they were located (place recognition) and recover their facing direction within that chamber (heading retrieval). In two experiments, we found that mice used nongeometric features for place recognition, but simultaneously failed to use these same features for heading retrieval, instead relying exclusively on spatial geometry. These results suggest the existence of separate systems for place recognition and heading retrieval in mice that are differentially sensitive to geometric and nongeometric cues. We speculate that a similar cognitive architecture may underlie human navigational behavior.
Current Biology | 2017
Alex T. Keinath; Joshua B. Julian; Russell A. Epstein; Isabel A. Muzzio
When a navigator’s internal sense of direction is disrupted, she must rely on external cues to regain her bearings, a process termed spatial reorientation. Extensive research has demonstrated that the geometric shape of the environment exerts powerful control over reorientation behavior, but the neural and cognitive mechanisms underlying this phenomenon are not well understood. Whereas some theories claim that geometry controls behavior through an allocentric mechanism potentially tied to the hippocampus, others postulate that disoriented navigators reach their goals by using an egocentric view-matching strategy. To resolve this debate, we characterized hippocampal representations during reorientation. We first recorded from CA1 cells as disoriented mice foraged in chambers of various shapes. We found that the alignment of the recovered hippocampal map was determined by the geometry of the chamber, but not by nongeometric cues, even when these cues could be used to disambiguate geometric ambiguities. We then recorded hippocampal activity as disoriented mice performed a classical goal-directed spatial memory task in a rectangular chamber. Again, we found that the recovered hippocampal map aligned solely to the chamber geometry. Critically, we also found a strong correspondence between the hippocampal map alignment and the animals’ behavior, making it possible to predict the search location of the animal from neural responses on a trial-by-trial basis. Together, these results demonstrate that spatial reorientation involves the alignment of the hippocampal map to local geometry. We hypothesize that geometry may be an especially salient cue for reorientation because it is an inherently stable aspect of the environment.
NeuroImage | 2016
Frederik S. Kamps; Joshua B. Julian; Jonas Kubilius; Nancy Kanwisher; Daniel D. Dilks
Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied of the three, the more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and to these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than to single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements, in both spatial boundary and scene content representation, while PPA and RSC represent global scene properties.
Nature Neuroscience | 2018
Joshua B. Julian; Alexandra T. Keinath; Giulia Frazzetta; Russell A. Epstein
When participants performed a visual search task, functional MRI responses in entorhinal cortex exhibited a sixfold periodic modulation by gaze-movement direction. The orientation of this modulation was determined by the shape and orientation of the bounded search space. These results indicate that human entorhinal cortex represents visual space using a boundary-anchored grid, analogous to that used by rodents to represent navigable space.
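The sixfold periodic modulation reported here is commonly quantified by regressing the signal on sin(6θ) and cos(6θ) of the movement direction θ; the fitted quadrature pair gives the modulation amplitude and the grid orientation. The sketch below is a minimal illustration of that standard analysis using ordinary least squares, not the authors' actual pipeline; variable names and the simple intercept-only design are assumptions.

```python
import numpy as np

def sixfold_modulation(theta, signal):
    """Estimate sixfold (hexadirectional) modulation of a signal by direction.

    theta:  1-D array of movement directions in radians
    signal: 1-D array of responses (e.g., fMRI amplitude), same length
    Returns (amplitude, orientation): the strength of the 6-cycle modulation
    and the preferred grid orientation in radians (in [0, 60) degrees terms).
    """
    # Design matrix: cos(6θ), sin(6θ), and an intercept
    X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta),
                         np.ones_like(theta)])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    amplitude = np.hypot(beta[0], beta[1])        # modulation strength
    orientation = np.arctan2(beta[1], beta[0]) / 6  # phase of the 6-cycle fit
    return amplitude, orientation
```

With evenly sampled directions the two quadrature regressors are orthogonal, so the fit recovers the amplitude and orientation of any underlying cos(6(θ − φ)) component exactly.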
Frontiers in Human Neuroscience | 2016
Peter Bryan; Joshua B. Julian; Russell A. Epstein
The parahippocampal place area (PPA) is one of several brain regions that respond more strongly to scenes than to non-scene items such as objects and faces. The mechanism underlying this scene-preferential response remains unclear. One possibility is that the PPA is tuned to low-level stimulus features that are found more often in scenes than in less-preferred stimuli. Supporting this view, Nasr et al. (2014) recently observed that some of the stimuli that are known to strongly activate the PPA contain a large number of rectilinear edges. They further demonstrated that PPA response is modulated by rectilinearity for a range of non-scene images. Motivated by these results, we tested whether rectilinearity suffices to explain PPA selectivity for scenes. In the first experiment, we replicated the previous finding of modulation by rectilinearity in the PPA for arrays of 2-d shapes. However, two further experiments failed to find a rectilinearity effect for faces or scenes: high-rectilinearity faces and scenes did not activate the PPA any more strongly than low-rectilinearity faces and scenes. Moreover, the categorical advantage for scenes vs. faces was maintained in the PPA and two other scene-selective regions—the retrosplenial complex (RSC) and occipital place area (OPA)—when rectilinearity was matched between stimulus sets. We conclude that selectivity for scenes in the PPA cannot be explained by a preference for low-level rectilinear edges.