
Publications

Featured research published by John A. Pyles.


PLOS ONE | 2013

Explicating the Face Perception Network with White Matter Connectivity

John A. Pyles; Timothy D. Verstynen; Walter W. Schneider; Michael J. Tarr

A network of multiple brain regions is recruited in face perception. Our understanding of the functional properties of this network can be facilitated by explicating the structural white matter connections that exist between its functional nodes. We accomplished this using functional MRI (fMRI) in combination with fiber tractography on high angular resolution diffusion weighted imaging data. We identified the three nodes of the core face network: the “occipital face area” (OFA), the “fusiform face area” (mid-fusiform gyrus or mFus), and the superior temporal sulcus (STS). Additionally, a region of the anterior temporal lobe (aIT), implicated as being important for face perception, was identified. Our data suggest that we can further divide the OFA into multiple anatomically distinct clusters – a partitioning consistent with several recent neuroimaging results. More generally, structural white matter connectivity within this network revealed: 1) Connectivity between aIT and mFus, and between aIT and occipital regions, consistent with studies implicating this posterior to anterior pathway as critical to normal face processing; 2) Strong connectivity between mFus and each of the occipital face-selective regions, suggesting that these three areas may subserve different functional roles; 3) Almost no connectivity between STS and mFus, or between STS and the other face-selective regions. Overall, our findings suggest a re-evaluation of the “core” face network with respect to what functional areas are or are not included in this network.


Frontiers in Human Neuroscience | 2010

fMR-adaptation reveals invariant coding of biological motion on the human STS

Emily D. Grossman; Nicole L. Jardine; John A. Pyles

Neuroimaging studies of biological motion perception have found a network of coordinated brain areas, the hub of which appears to be the human posterior superior temporal sulcus (STSp). Understanding the functional role of the STSp requires characterizing the response tuning of neuronal populations underlying the BOLD response. Thus far our understanding of these response properties comes from single-unit studies of the monkey anterior STS, which has individual neurons tuned to body actions, with a small population invariant to changes in viewpoint, position and size of the action being viewed. To measure for homologous functional properties on the human STS, we used fMR-adaptation to investigate action, position and size invariance. Observers viewed pairs of point-light animations depicting human actions that were either identical, differed in the action depicted, locally scrambled, or differed in the viewing perspective, the position or the size. While extrastriate hMT+ had neural signals indicative of viewpoint specificity, the human STS adapted for all of these changes, as compared to viewing two different actions. Similar findings were observed in more posterior brain areas also implicated in action recognition. Our findings are evidence for viewpoint invariance in the human STS and related brain areas, with the implication that actions are abstracted into object-centered representations during visual analysis.


Nature Communications | 2014

Dynamic encoding of face information in the human fusiform gyrus

Avniel Singh Ghuman; Nicolas M. Brunet; Yuanning Li; Roma O. Konecky; John A. Pyles; Shawn Walls; Vincent J. DeStefino; Wei Wang; R. Mark Richardson

Humans’ ability to rapidly and accurately detect, identify, and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, the temporal dynamics of face information processing in the FFA remain unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly upon the FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200-500 ms contained expression-invariant information about which of 70 faces participants were viewing, along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role the FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses.
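The style of time-resolved decoding described above can be illustrated with a minimal sketch of multivariate pattern classification in sliding time windows. This is not the authors' pipeline: the nearest-centroid classifier, window sizes, and synthetic data below are illustrative assumptions (published work of this kind typically uses stronger classifiers such as regularized logistic regression or SVMs).

```python
import numpy as np

def sliding_window_decode(X, y, win=25, step=25, folds=5, seed=0):
    """Cross-validated decoding of a binary stimulus label from
    multichannel recordings, one accuracy per sliding time window.
    X: (trials, channels, timepoints); y: (trials,) labels in {0, 1}.
    Returns (window_start_times, accuracies)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))          # fixed fold assignment
    starts = list(range(0, X.shape[2] - win + 1, step))
    accs = []
    for s in starts:
        # flatten each trial's window into one feature vector
        feats = X[:, :, s:s + win].reshape(len(y), -1)
        correct = 0
        for f in range(folds):
            test = order[f::folds]
            train = np.setdiff1d(order, test)
            # nearest-centroid classifier: assign each test trial to the
            # class whose mean training pattern is closest in Euclidean distance
            c0 = feats[train][y[train] == 0].mean(axis=0)
            c1 = feats[train][y[train] == 1].mean(axis=0)
            d0 = np.linalg.norm(feats[test] - c0, axis=1)
            d1 = np.linalg.norm(feats[test] - c1, axis=1)
            pred = (d1 < d0).astype(int)
            correct += int((pred == y[test]).sum())
        accs.append(correct / len(y))
    return np.array(starts), np.array(accs)
```

Run on synthetic data in which class information appears only after timepoint 50, accuracy stays near chance in early windows and rises sharply in late windows, mirroring the idea that different kinds of face information become decodable at different latencies.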


Weather and Forecasting | 2007

The Effect of Probabilistic Information on Threshold Forecasts

Susan Joslyn; Karla Pak; David W. Jones; John A. Pyles; Earl Hunt

The study reported here asks whether the use of probabilistic information indicating forecast uncertainty improves the quality of deterministic weather decisions. Participants made realistic wind speed forecasts based on historical information in a controlled laboratory setting. They also decided whether it was appropriate to post an advisory for winds greater than 20 kt (10.29 m s−1) during the same time intervals and in the same geographic locations. On half of the forecasts each participant also read a color-coded chart showing the probability of winds greater than 20 kt. Participants had a general tendency to post too many advisories in the low probability situations (0%–10%) and too few advisories in very high probability situations (90%–100%). However, the probability product attenuated these biases. When participants used the probability product, they posted fewer advisories when the probability of high winds was low and more advisories when the probability of high winds was high.
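Why probabilistic guidance should help with a deterministic yes/no decision can be made concrete with the classic cost-loss decision model. This is an illustration, not the task used in the study: posting an advisory incurs a fixed cost, while failing to post before a high-wind event incurs a larger loss, so posting is worthwhile exactly when the exceedance probability beats the cost/loss ratio.

```python
def post_advisory(p_exceed, cost, loss):
    """Cost-loss decision rule (illustrative). Posting an advisory costs
    `cost` (e.g., preparation expense); a missed high-wind event costs
    `loss`. Expected cost of posting is `cost`; of not posting, p * loss.
    Post whenever the exceedance probability beats cost/loss."""
    return p_exceed * loss > cost
```

With cost 1 and loss 10, the optimal rule posts only when p exceeds 0.1, which frames the observed over-posting in the 0%–10% bin (and under-posting in the 90%–100% bin) as deviations from optimality that the probability product helped attenuate.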


Frontiers in Computational Neuroscience | 2014

Exploration of complex visual feature spaces for object perception

Daniel Leeds; John A. Pyles; Michael J. Tarr

The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm³ brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range.
Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation.
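The idea of response-driven stimulus selection can be sketched as a greedy search over a stimulus feature space. This is a loose illustration under stated assumptions, not the published method: `respond` stands in for a simulated or measured BOLD response to a stimulus, and the search simply steps to the unvisited stimulus nearest (in feature space) to the best responder seen so far.

```python
import numpy as np

def realtime_search(feature_coords, respond, n_trials=30, seed=0, start=None):
    """Greedy real-time stimulus search (illustrative sketch).
    feature_coords: (n_stimuli, dims) positions of stimuli in a feature space.
    respond: callable mapping a stimulus index to a measured response.
    On each trial, show the unvisited stimulus closest to the
    best-responding stimulus so far, climbing toward the region of the
    space that most strongly activates the brain region of interest."""
    rng = np.random.default_rng(seed)
    n = len(feature_coords)
    visited, responses = [], []
    current = int(rng.integers(n)) if start is None else start
    for _ in range(min(n_trials, n)):
        visited.append(current)
        responses.append(respond(current))
        best = visited[int(np.argmax(responses))]
        # distance from the best stimulus so far to every other stimulus;
        # already-shown stimuli are excluded from the next choice
        dists = np.linalg.norm(feature_coords - feature_coords[best], axis=1)
        dists[visited] = np.inf
        if not np.isfinite(dists).any():
            break
        current = int(np.argmin(dists))
    return visited, responses
```

On a toy one-dimensional feature space with a single response peak, the search walks steadily toward the peak, so the best stimulus found is always at least as close to the peak as the starting point, which captures the spirit of selecting stimuli in real time to maximize responses.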


Journal of Vision | 2017

Exploring the spatio-temporal neural basis of face learning

Ying Yang; Yang Xu; Carol A. Jew; John A. Pyles; Robert E. Kass; Michael J. Tarr

Humans are experts at face individuation. Although previous work has identified a network of face-sensitive regions and some of the temporal signatures of face processing, as yet, we do not have a clear understanding of how such face-sensitive regions support learning at different time points. To study the joint spatio-temporal neural basis of face learning, we trained subjects to categorize two groups of novel faces and recorded their neural responses using magnetoencephalography (MEG) throughout learning. A regression analysis of neural responses in face-sensitive regions against behavioral learning curves revealed significant correlations with learning in the majority of the face-sensitive regions in the face network, mostly between 150–250 ms, but also after 300 ms. However, the effect was smaller in nonventral regions (within the superior temporal areas and prefrontal cortex) than in the ventral regions (within the inferior occipital gyri (IOG), midfusiform gyri (mFUS) and anterior temporal lobes). A multivariate discriminant analysis also revealed that IOG and mFUS, which showed strong correlation effects with learning, exhibited significant discriminability between the two face categories at different time points both between 150–250 ms and after 300 ms. In contrast, the nonventral face-sensitive regions, where correlation effects with learning were smaller, did exhibit some significant discriminability, but mainly after 300 ms. In sum, our findings indicate that early and recurring temporal components arising from ventral face-sensitive regions are critically involved in learning new faces.


Frontiers in Psychology | 2013

Fine-grained temporal coding of visually-similar categories in the ventral visual pathway and prefrontal cortex

Yang Xu; Christopher D'Lauro; John A. Pyles; Robert E. Kass; Michael J. Tarr

Humans are remarkably proficient at categorizing visually-similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned, with feedback, to discriminate two highly-similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning, with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex, with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning.


Cortex | 2016

Associative hallucinations result from stimulating left ventromedial temporal cortex.

Elissa Aminoff; Yuanning Li; John A. Pyles; Michael Ward; R. Mark Richardson; Avniel Singh Ghuman

Visual recognition requires connecting perceptual information with contextual information and existing knowledge. The ventromedial temporal cortex (VTC), including the medial fusiform, has been linked with object recognition, paired associate learning, contextual processing, and episodic memory, suggesting that this area may be critical in connecting visual processing, context, knowledge and experience. However, evidence for the link between associative processing, episodic memory, and visual recognition in VTC is currently lacking. Using electrocorticography (ECoG) in a single human patient, medial regions of the left VTC were found to be sensitive to the contextual associations of objects. Electrical brain stimulation (EBS) of this part of the left VTC of the patient, functionally defined as sensitive to associative processing, caused memory related, associative experiential visual phenomena. This provides evidence of a relationship between visual recognition, associative processing, and episodic memory. These results suggest a potential role for abnormalities of these processes as part of a mechanism that gives rise to some visual hallucinations.


Brain Research | 2012

Stimulus complexity modulates contrast response functions in the human middle temporal area (hMT+)

Javier O. Garcia; John A. Pyles; Emily D. Grossman

The brain systems that support motion perception are some of the most studied in the primate visual system, with apparent specialization in the middle temporal area (hMT+ in humans, MT or V5 in monkeys). Even with this specialization, it is safe to assume that the hMT+ interacts with other brain systems as visual tasks demand. Here we have measured those interactions using a specialized case of structure-from-motion, point-light biological motion. We have measured the BOLD-contrast response functions in hMT+ for translating and biological motion. Even after controlling for task and attention, we find the BOLD response for translating motion to be largely insensitive to contrast, but the BOLD response for biological motion to be strongly contrast dependent. To track the brain systems involved in these interactions, we probed for brain areas outside of the hMT+ with the same contrast dependent neural response. This analysis revealed brain systems known to support form perception (including ventral temporal cortex and the superior temporal sulcus). We conclude that the contrast dependent response in hMT+ likely reflects stimulus complexity, and may be evidence for interactions with shape-based brain systems.


Journal of Vision | 2015

White-matter connectivity of brain regions recruited during the perception of dynamic objects

John A. Pyles; Michael J. Tarr

Dynamic objects are ubiquitous in our visual environment. In previous work we identified a network of brain regions recruited during the perception of dynamic objects, and provided evidence that many regions within higher-level and retinotopic visual cortex encode invariant information about dynamic objects (Pyles & Tarr, 2013, 2014). However, the structural connectivity of this network remains largely unknown. Here we combine fMRI with diffusion-weighted imaging and deterministic fiber-tracking to investigate the white-matter connectivity between these areas. Dynamic object-selective regions were identified with a new fMRI localizer using short animations of moving, articulating novel objects, contrasted with phase-scrambled versions of the same animations. Subjects also participated in a diffusion spectrum imaging scan using a 257-direction sequence. During viewing of dynamic objects we observed the recruitment of large regions of occipito-temporal cortex, substantially overlapping with LOC and hMT+ (designated dynamic-LOC), as well as regions of parietal cortex. These functional regions, along with results from retinotopy and an MT/MST localizer, were used as regions of interest for deterministic fiber-tracking to map the white matter connections between these areas. Additionally, we investigated the connectivity of these regions to the rest of the brain in an unconstrained analysis. Results show short-range connections between nearby areas selective for dynamic objects, as well as connectivity to retinotopic cortex. Fiber streamlines originating in the large dynamic-LOC region showed longer-range connections to regions of anterior temporal lobe and frontal lobe. Finally, we explicate the relationship of these tracts to the inferior longitudinal fasciculus and the vertical occipital fasciculus, a tract connecting dorsal and ventral cortex (Yeatman et al., 2014).
A better account of the structural connectivity of this network and its relation to other major tracts in visual cortex will improve our understanding of its functional organization and inform future research. Meeting abstract presented at VSS 2015.

Collaboration

Top co-authors of John A. Pyles:

Michael J. Tarr, Carnegie Mellon University
Darren Seibert, Massachusetts Institute of Technology
Marlene Behrmann, Carnegie Mellon University
Abhinav Gupta, Carnegie Mellon University
Carol A. Jew, University of Rochester