Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jack Ryan is active.

Publications


Featured research published by Jack Ryan.


Nature Neuroscience | 2014

Anchoring the neural compass: coding of local spatial reference frames in human medial parietal lobe.

Steven A. Marchette; Lindsay K. Vass; Jack Ryan; Russell A. Epstein

The neural systems that code for location and facing direction during spatial navigation have been investigated extensively; however, the mechanisms by which these quantities are referenced to external features of the world are not well understood. To address this issue, we examined behavioral priming and functional magnetic resonance imaging activity patterns while human subjects recalled spatial views from a recently learned virtual environment. Behavioral results indicated that imagined location and facing direction were represented during this task, and multivoxel pattern analyses indicated that the retrosplenial complex (RSC) was the anatomical locus of these spatial codes. Critically, in both cases, location and direction were defined on the basis of fixed elements of the local environment and generalized across geometrically similar local environments. These results suggest that RSC anchors internal spatial representations to local topographical features, thus allowing us to stay oriented while we navigate and retrieve from memory the experience of being in a particular place.


The Journal of Neuroscience | 2015

Outside Looking In: Landmark Generalization in the Human Navigational System

Steven A. Marchette; Lindsay K. Vass; Jack Ryan; Russell A. Epstein

The use of landmarks is central to many navigational strategies. Here we use multivoxel pattern analysis of fMRI data to understand how landmarks are coded in the human brain. Subjects were scanned while viewing the interiors and exteriors of campus buildings. Despite their visual dissimilarity, interiors and exteriors corresponding to the same building elicited similar activity patterns in the parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA), three regions known to respond strongly to scenes and buildings. Generalization across stimuli depended on knowing the correspondences among them in the PPA but not in the other two regions, suggesting that the PPA is the key region involved in learning the different perceptual instantiations of a landmark. In contrast, generalization depended on the ability to freely retrieve information from memory in RSC, and it did not depend on familiarity or cognitive task in OPA. Together, these results suggest a tripartite division of labor, whereby PPA codes landmark identity, RSC retrieves spatial or conceptual information associated with landmarks, and OPA processes visual features that are important for landmark recognition.

SIGNIFICANCE STATEMENT: A central element of spatial navigation is the ability to recognize the landmarks that mark different places in the world. However, little is known about how the brain performs this function. Here we show that the parahippocampal place area (PPA), a region in human occipitotemporal cortex, exhibits key features of a landmark recognition mechanism. Specifically, the PPA treats different perceptual instantiations of the same landmark as representationally similar, but only when subjects have enough experience to know the correspondences among the stimuli. We also identify two other brain regions that exhibit landmark generalization, but with less sensitivity to familiarity. These results elucidate the brain networks involved in the learning and recognition of navigational landmarks.


Cerebral Cortex | 2016

Coding of Object Size and Object Category in Human Visual Cortex

Joshua B. Julian; Jack Ryan; Russell A. Epstein

A salient aspect of objects is their real-world size. Large objects tend to be fixed in the world and can act as navigational barriers and landmarks, whereas small objects tend to be moveable and manipulable. Previous work has identified regions of visual cortex that respond differentially to large versus small objects, but the role of size in organizing representations of object categories has not been fully explored. To address this issue, we scanned subjects while they viewed large and small objects drawn from 20 categories, with retinotopic extent equated across size classes. Univariate analyses replicated previous results showing a greater response to large than small objects in scene-responsive regions and the converse effect in the left occipitotemporal sulcus. Critically, multivariate analyses revealed organization-by-size both within and across functional regions, as evidenced by activation patterns that were more similar for object categories of the same size than for object categories of different size. This effect was observed in both scene- and object-responsive regions and across high-level visual cortex as a whole, but not in early visual cortex. We hypothesize that real-world size is an important dimension for object category organization because of the many ecologically significant differences between large and small objects.
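
The multivariate result described above rests on a pattern-similarity logic that is easy to sketch. The following Python snippet is a minimal, hypothetical illustration of that logic only; the array names, the random placeholder data, and the simple within- versus between-size comparison are assumptions for the example and do not reproduce the authors' actual analysis pipeline or statistics.

```python
# Sketch of a size-based pattern-similarity analysis (illustration only).
# Placeholder data stand in for category-level fMRI activity patterns
# (e.g., GLM beta estimates) in one region of interest.
import numpy as np

rng = np.random.default_rng(0)

n_categories, n_voxels = 20, 500
patterns = rng.normal(size=(n_categories, n_voxels))  # hypothetical patterns
is_large = np.array([True] * 10 + [False] * 10)       # first 10 categories "large"

# Pairwise Pearson correlations between category patterns.
similarity = np.corrcoef(patterns)

same_size, different_size = [], []
for i in range(n_categories):
    for j in range(i + 1, n_categories):
        bucket = same_size if is_large[i] == is_large[j] else different_size
        bucket.append(similarity[i, j])

print("mean same-size similarity:     ", np.mean(same_size))
print("mean different-size similarity:", np.mean(different_size))
# Organization by real-world size would appear as reliably higher similarity
# for same-size category pairs, assessed statistically across subjects.
```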


Proceedings of the National Academy of Sciences of the United States of America | 2017

The human visual cortex response to melanopsin-directed stimulation is accompanied by a distinct perceptual experience.

Manuel Spitschan; Andrew S. Bock; Jack Ryan; Giulia Frazzetta; David H. Brainard; Geoffrey K. Aguirre

Significance: Melanopsin-containing retinal cells detect bright light and contribute to reflex visual responses such as pupil constriction. Their role in conscious, cortical vision is less understood. Using functional MRI to measure brain activity, we find that melanopsin-directed stimulation reaches the visual cortex in people. Such stimulation also produces a distinct perceptual experience. Our results have clinical importance as melanopsin function may contribute to the discomfort that some people experience from bright light.

The photopigment melanopsin supports reflexive visual functions in people, such as pupil constriction and circadian photoentrainment. What contribution melanopsin makes to conscious visual perception is less studied. We devised a stimulus that targeted melanopsin separately from the cones using pulsed (3-s) spectral modulations around a photopic background. Pupillometry confirmed that the melanopsin stimulus evokes a response different from that produced by cone stimulation. In each of four subjects, a functional MRI response in area V1 was found. This response scaled with melanopic contrast and was not easily explained by imprecision in the silencing of the cones. Twenty additional subjects then observed melanopsin pulses and provided a structured rating of the perceptual experience. Melanopsin stimulation was described as an unpleasant, blurry, minimal brightening that quickly faded. We conclude that isolated stimulation of melanopsin is likely associated with a response within the cortical visual pathway and with an evoked conscious percept.
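
Targeting melanopsin "separately from the cones" refers to the silent-substitution approach implied by the abstract's mention of cone silencing. The Python sketch below illustrates only the general linear-algebra idea with made-up numbers; the sensitivity matrix, the number of primaries, and the maximization step are assumptions for the example, and real designs must also account for other receptor classes (e.g., rods), the background light level, calibration error, and the device's gamut.

```python
# Toy illustration of silent substitution: find a modulation of device
# primaries that changes melanopsin excitation while leaving L, M, and S
# cone excitations unchanged. All numbers are random placeholders.
import numpy as np

rng = np.random.default_rng(1)

n_primaries = 6  # hypothetical spectrally tunable light source
# Rows: L cone, M cone, S cone, melanopsin; columns: device primaries.
receptor_by_primary = np.abs(rng.normal(size=(4, n_primaries)))
cones = receptor_by_primary[:3]      # receptor classes to silence
melanopsin = receptor_by_primary[3]  # receptor class to modulate

# Primary modulations in the null space of the cone rows produce zero
# change in all three cone classes.
_, _, vt = np.linalg.svd(cones)
cone_silent_basis = vt[3:]  # spans the (n_primaries - 3)-dimensional space

# Within the cone-silent space, choose the direction with the largest
# projection onto the melanopsin row.
weights = cone_silent_basis @ melanopsin
direction = cone_silent_basis.T @ weights
direction /= np.linalg.norm(direction)

print("cone response to modulation (should be ~0):", cones @ direction)
print("melanopsin response to modulation:", melanopsin @ direction)
```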


Journal of Vision | 2015

The Occipital Place Area is causally involved in representing environmental boundaries during navigation

Joshua B. Julian; Jack Ryan; Roy H. Hamilton; Russell A. Epstein

Previous work indicates that learning of spatial locations relative to environmental boundaries and the learning of spatial locations relative to discrete landmarks are dissociable processes supported by different neural systems (Wang & Spelke, 2002; Doeller & Burgess, 2008). However, the perceptual systems that provide the inputs to these learning mechanisms are not well understood. We hypothesized that the Occipital Place Area (OPA), a scene-selective region located near the transverse occipital sulcus, might play a critical role in boundary-based learning, by extracting boundary information from visual scenes during navigation. To test this idea, we used transcranial magnetic stimulation (TMS) to interrupt processing in the OPA while subjects performed a virtual navigation task that required them to learn locations of test objects relative to boundaries and landmarks. The environment consisted of a circular chamber, which was limited by a boundary wall, contained a rotationally symmetric landmark object, and was surrounded by distal cues for orientation (mountains, rendered at infinity). The relative position of the landmark and boundary changed across testing blocks. Test objects were tethered to either the landmark or the boundary, thus allowing learning of object location relative to each cue to be independently assessed. Prior to each block, transcranial magnetic continuous theta burst stimulation (cTBS) was applied to either the functionally-defined right OPA or a Vertex control site. Consistent with our prediction, we found that cTBS to the OPA impaired learning of object locations relative to boundaries, but not relative to landmarks. These results provide the first evidence that OPA is causally involved in visually-guided navigation. Moreover, they indicate that the OPA is essential for the coding of environmental boundary information. Meeting abstract presented at VSS 2015.


Journal of Vision | 2015

Neural coding of navigational affordances in the local visual environment

Michael F. Bonner; Jack Ryan; Russell A. Epstein

An essential component of visually guided navigation is the ability to perceive features of the environment that afford or constrain movement. For example, in indoor environments, walls limit one's potential routes, while passageways facilitate movement. Here we attempt to identify the cortical mechanisms that encode such navigational features. Specifically, we test the hypothesis that scene-selective regions of the human brain represent navigational affordances in visual scenes. In an fMRI experiment, subjects viewed images of artificially rendered rooms that had identical geometry as defined by their walls, but varied in the number (one to three) and position (left, right, center) of spatial passageways (i.e., open doorways) connected to them. Thus, the layout of these passageways defined the navigable space in each scene. Several versions of each layout were shown, each with the same set of passageways but different textures on the walls and floors. Furthermore, half the rooms were empty except for the walls and passageways, while the other half included visual clutter in the form of paintings along the walls, which were similar in size and shape to the passageways. Images were presented for 2 seconds while subjects maintained central fixation and performed an unrelated color-discrimination task on two dots overlaid on each scene. Using multivoxel pattern analysis, we sought to identify representations of navigational layout that were invariant to other visual properties of the images. This analysis revealed consistent and invariant coding of navigational layout in the occipital place area (OPA), a scene-selective region near the transverse occipital sulcus. In this region, scenes elicited similar activation patterns if they had similar navigational layout, independent of changes in texture or the presence of visual clutter (i.e., paintings on the walls). These findings suggest that the OPA encodes representations of navigational affordances that could be used to guide movement through local space. Meeting abstract presented at VSS 2015.


Nature Neuroscience | 2015

Corrigendum: Anchoring the neural compass: coding of local spatial reference frames in human medial parietal lobe

Steven A. Marchette; Lindsay K. Vass; Jack Ryan; Russell A. Epstein


Journal of Vision | 2015

Coding of object size and object category in scene regions

Jack Ryan; Joshua B. Julian; Russell A. Epstein

Recent work suggests that scene-responsive regions such as the parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA) might play a role in the processing of nonscene objects in addition to their role in processing scenes. For example, Konkle & Oliva (2012) reported that these regions respond more strongly to large objects than to small objects. Does this indicate that scene-responsive regions encode objects in terms of their real-world size, or do they simply encode large objects preferentially? To investigate this issue, we scanned subjects while they viewed objects drawn from 20 categories, 10 of which were physically large (e.g., stove, copier) and 10 of which were physically small (e.g., binoculars, mug). Objects were shown for 1 s each with a 2 s ISI. Large and small categories were matched in terms of a number of low-level image properties, including retinotopic size, chrominance, luminance, and spatial frequency. Preliminary results replicated the previously reported univariate effect: PPA, RSC, and OPA all responded more strongly to large objects than to small objects. Moreover, this finding was echoed in multivoxel pattern analyses: pattern similarity in these regions was greater for object categories of similar size (e.g., stove-copier) than for object categories of different size (e.g., stove-mug). Notably, we did not observe evidence for coding of object category in these regions when object size was controlled. That is, across scan runs, multivoxel patterns were no more similar for objects of the same category and size than for objects of different category but the same size. In contrast, object category could be readily decoded in the lateral occipital complex. These results suggest that scene regions code spatial properties of objects useful for navigation but are not centrally involved in object recognition. Meeting abstract presented at VSS 2015.


Current Biology | 2016

The Occipital Place Area Is Causally Involved in Representing Environmental Boundaries during Navigation

Joshua B. Julian; Jack Ryan; Roy H. Hamilton; Russell A. Epstein


Cognition | 2017

Schematic representations of local environmental space guide goal-directed navigation.

Steven A. Marchette; Jack Ryan; Russell A. Epstein

Collaboration


Dive into Jack Ryan's collaborations.

Top Co-Authors

Joshua B. Julian
University of Pennsylvania

Lindsay K. Vass
University of Pennsylvania

Roy H. Hamilton
University of Pennsylvania

David H. Brainard
University of Pennsylvania

Manuel Spitschan
University of Pennsylvania

Michael F. Bonner
University of Pennsylvania

Alex T. Keinath
University of Pennsylvania