Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mark A. Chevillet is active.

Publication


Featured research published by Mark A. Chevillet.


The Journal of Neuroscience | 2011

Functional Correlates of the Anterolateral Processing Hierarchy in Human Auditory Cortex

Mark A. Chevillet; Maximilian Riesenhuber; Josef P. Rauschecker

Converging evidence supports the hypothesis that an anterolateral processing pathway mediates sound identification in auditory cortex, analogous to the role of the ventral cortical pathway in visual object recognition. Studies in nonhuman primates have characterized the anterolateral auditory pathway as a processing hierarchy, composed of three anatomically and physiologically distinct initial stages: core, belt, and parabelt. In humans, potential homologs of these regions have been identified anatomically, but reliable and complete functional distinctions between them have yet to be established. Because the anatomical locations of these fields vary across subjects, investigations of potential homologs between monkeys and humans require these fields to be defined in single subjects. Using functional MRI, we presented three classes of sounds (tones, band-passed noise bursts, and conspecific vocalizations), equivalent to those used in previous monkey studies. In each individual subject, three regions showing functional similarities to macaque core, belt, and parabelt were readily identified. Furthermore, the relative sizes and locations of these regions were consistent with those reported in human anatomical studies. Our results demonstrate that the functional organization of the anterolateral processing pathway in humans is largely consistent with that of nonhuman primates. Because our scanning sessions last only 15 min per subject, they can be run in conjunction with other scans. This will enable future studies to characterize functional modules in human auditory cortex at a level of detail previously possible only in visual cortex. Furthermore, the approach of using identical schemes in both humans and monkeys will aid in establishing potential homologies between them.
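
The localizer logic in this abstract lends itself to a simple sketch: defining core-, belt-, and parabelt-like voxels from per-condition response contrasts. The minimal Python sketch below assumes voxelwise response estimates (betas) for each sound class are already available; the array names, threshold, and contrast directions are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of contrast-based field definition, assuming per-condition
# voxelwise betas are already estimated. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 10_000

# Stand-in beta maps for the three stimulus classes (one value per voxel).
beta_tones = rng.normal(size=n_voxels)
beta_noise = rng.normal(size=n_voxels)
beta_vocal = rng.normal(size=n_voxels)

threshold = 0.5  # arbitrary contrast threshold for the sketch

# Core-like voxels: robust responses to simple tones.
core_mask = beta_tones > threshold
# Belt-like voxels: prefer band-passed noise over tones.
belt_mask = (beta_noise - beta_tones) > threshold
# Parabelt-like voxels: prefer vocalizations over band-passed noise.
parabelt_mask = (beta_vocal - beta_noise) > threshold

for name, mask in [("core", core_mask), ("belt", belt_mask), ("parabelt", parabelt_mask)]:
    print(f"{name}: {mask.sum()} voxels")
```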


The Journal of Neuroscience | 2013

Automatic Phoneme Category Selectivity in the Dorsal Auditory Stream

Mark A. Chevillet; Xiong Jiang; Josef P. Rauschecker; Maximilian Riesenhuber

Debates about motor theories of speech perception have recently been reignited by a burst of reports implicating premotor cortex (PMC) in speech perception. Often, however, these debates conflate perceptual and decision processes. Evidence that PMC activity correlates with task difficulty and subject performance suggests that PMC might be recruited, in certain cases, to facilitate category judgments about speech sounds (rather than speech perception, which involves decoding of sounds). However, it remains unclear whether PMC does, indeed, exhibit neural selectivity that is relevant for speech decisions. Further, it is unknown whether PMC activity in such cases reflects input via the dorsal or ventral auditory pathway, and whether PMC processing of speech is automatic or task-dependent. In a novel modified categorization paradigm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted their attention from phoneme category using a challenging dichotic listening task. Using fMRI rapid adaptation to probe neural selectivity, we observed acoustic-phonetic selectivity in left anterior and left posterior auditory cortical regions. Conversely, we observed phoneme-category selectivity in left PMC that correlated with explicit phoneme-categorization performance measured after scanning, suggesting that PMC recruitment can account for performance on phoneme-categorization tasks. Structural equation modeling revealed connectivity from posterior, but not anterior, auditory cortex to PMC, suggesting a dorsal route for auditory input to PMC. Our results provide evidence for an account of speech processing in which the dorsal stream mediates automatic sensorimotor integration of speech and may be recruited to support speech decision tasks.
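
As a rough illustration of the fMRI rapid-adaptation logic described above, the hedged Python sketch below computes release-from-adaptation indices from simulated paired-stimulus responses: a category-selective region should show extra release when a pair crosses the phoneme boundary, beyond what an acoustic change alone produces. All values and variable names are invented for illustration.

```python
# Hedged sketch of the rapid-adaptation logic: compare responses to
# repeated, within-category, and between-category stimulus pairs.
# Data are simulated; nothing here comes from the paper.
import numpy as np

rng = np.random.default_rng(1)

# Simulated trial-wise responses for one region of interest.
repeat_pairs = rng.normal(loc=1.0, scale=0.2, size=40)   # identical sounds
within_pairs = rng.normal(loc=1.1, scale=0.2, size=40)   # same category, different acoustics
between_pairs = rng.normal(loc=1.5, scale=0.2, size=40)  # crosses the category boundary

def release_index(condition, baseline):
    """Mean release from adaptation relative to a baseline condition."""
    return condition.mean() - baseline.mean()

# Acoustic-phonetic selectivity: release driven by any acoustic change.
acoustic_release = release_index(within_pairs, repeat_pairs)
# Category selectivity: additional release when the category changes.
category_release = release_index(between_pairs, within_pairs)

print(f"acoustic-phonetic release: {acoustic_release:.2f}")
print(f"category-selective release: {category_release:.2f}")
```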


Frontiers in Neuroinformatics | 2015

An automated images-to-graphs framework for high resolution connectomics

William Gray Roncal; Dean M. Kleissas; Joshua T. Vogelstein; Priya Manavalan; Kunal Lillaney; Michael Pekala; Randal C. Burns; R. Jacob Vogelstein; Carey E. Priebe; Mark A. Chevillet; Gregory D. Hager

Reconstructing a map of neuronal connectivity is a critical challenge in contemporary neuroscience. Recent advances in high-throughput serial section electron microscopy (EM) have produced massive 3D image volumes of nanoscale brain tissue for the first time. The resolution of EM allows for individual neurons and their synaptic connections to be directly observed. Recovering neuronal networks by manually tracing each neuronal process at this scale is unmanageable, and therefore researchers are developing automated image processing modules. Thus far, state-of-the-art algorithms focus only on the solution to a particular task (e.g., neuron segmentation or synapse identification). In this manuscript we present the first fully-automated images-to-graphs pipeline (i.e., a pipeline that begins with an imaged volume of neural tissue and produces a brain graph without any human interaction). To evaluate overall performance and select the best parameters and methods, we also develop a metric to assess the quality of the output graphs. We evaluate a set of algorithms and parameters, searching possible operating points to identify the best available brain graph for our assessment metric. Finally, we deploy a reference end-to-end version of the pipeline on a large, publicly available data set. This provides a baseline result and framework for community analysis and future algorithm development and testing. All code and data derivatives have been made publicly available in support of eventually unlocking new biofidelic computational primitives and understanding of neuropathologies.
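
At its core, the images-to-graphs idea turns a labeled neuron segmentation plus detected synapses into a connectivity graph. Below is a hedged toy sketch in Python of that final graph-construction step; the volume, synapse coordinates, and neuron IDs are fabricated for illustration and do not reflect the paper's actual pipeline components.

```python
# Toy sketch of the images-to-graphs endpoint: given a neuron
# segmentation volume and detected synapses, emit a directed graph
# whose nodes are neuron IDs and whose edge weights count contacts.
import numpy as np
from collections import Counter

# Segmentation volume: each voxel holds a neuron ID (0 = background).
seg = np.zeros((4, 4, 4), dtype=int)
seg[:2] = 1   # neuron 1 occupies the top half of the toy volume
seg[2:] = 2   # neuron 2 occupies the bottom half

# Detected synapses as (pre-side voxel, post-side voxel) coordinates.
synapses = [((1, 0, 0), (2, 0, 0)),
            ((1, 3, 3), (2, 3, 3))]

# Accumulate directed edges keyed by (pre-neuron, post-neuron).
edges = Counter()
for pre_voxel, post_voxel in synapses:
    pre_id, post_id = seg[pre_voxel], seg[post_voxel]
    if pre_id and post_id and pre_id != post_id:
        edges[(pre_id, post_id)] += 1

print(dict(edges))  # {(1, 2): 2} -> neuron 1 synapses twice onto neuron 2
```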


NeuroImage | 2017

Semantic attributes are encoded in human electrocorticographic signals during visual object recognition

Kyle M. Rupp; Matthew J. Roos; Griffin Milsap; Carlos A. Caceres; Christopher R. Ratto; Mark A. Chevillet; Nathan E. Crone; Michael Wolmetz

Non-invasive neuroimaging studies have shown that semantic category and attribute information are encoded in neural population activity. Electrocorticography (ECoG) offers several advantages over non-invasive approaches, but the degree to which semantic attribute information is encoded in ECoG responses is not known. We recorded ECoG while patients named objects from 12 semantic categories and then trained high-dimensional encoding models to map semantic attributes to spectral-temporal features of the task-related neural responses. Using these semantic attribute encoding models, untrained objects were decoded with accuracies comparable to whole-brain functional Magnetic Resonance Imaging (fMRI), and we observed that high-gamma activity (70–110 Hz) at basal occipitotemporal electrodes was associated with specific semantic dimensions (manmade-animate, canonically large-small, and places-tools). Individual patient results were in close agreement with reports from other imaging modalities on the time course and functional organization of semantic processing along the ventral visual pathway during object recognition. The semantic attribute encoding model approach is critical for decoding objects absent from a training set, as well as for studying complex semantic encodings without artificially restricting stimuli to a small number of semantic categories.

Highlights: Recognized objects can be accurately decoded from intracranial electrode responses. Semantic attribute encoding models generalize to new objects. Timing of semantic decoding is consistent with prior electrophysiological studies. Animacy and canonical size are represented in ventral temporal ECoG signals.
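
As a hedged illustration of the encoding-model approach described above, the Python sketch below fits a ridge-regression map from semantic attribute vectors to simulated neural features, then decodes a held-out object by matching observed features against predictions for candidate attribute vectors. All data, dimensions, and the regularization value are invented for illustration.

```python
# Hedged sketch of a semantic-attribute encoding model: linear ridge
# map from attributes to neural features, plus nearest-match decoding
# of an untrained object. Everything here is simulated.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_attrs, n_feats = 50, 12, 30

# Simulated semantic attribute vectors and neural features per object.
X = rng.normal(size=(n_train, n_attrs))                       # attributes
W_true = rng.normal(size=(n_attrs, n_feats))
Y = X @ W_true + 0.1 * rng.normal(size=(n_train, n_feats))    # neural features

# Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_attrs), X.T @ Y)

# Decode an untrained object: predict features for each candidate's
# attribute vector and pick the best correlation with the observed data.
candidates = rng.normal(size=(5, n_attrs))
true_idx = 3
observed = candidates[true_idx] @ W_true  # simulated held-out response

predicted = candidates @ W
scores = [np.corrcoef(p, observed)[0, 1] for p in predicted]
print("decoded candidate:", int(np.argmax(scores)))  # expect 3
```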


Nature Methods | 2018

A community-developed open-source computational ecosystem for big neuro data

Joshua T. Vogelstein; Eric S. Perlman; Benjamin Falk; Alex Baden; William Gray Roncal; Vikram Chandrashekhar; Forrest Collman; Sharmishtaa Seshamani; Jesse L. Patsolic; Kunal Lillaney; Michael M. Kazhdan; Robert Hider; Derek Pryor; Jordan Matelsky; Timothy Gion; Priya Manavalan; Brock A. Wester; Mark A. Chevillet; Eric T. Trautman; Khaled Khairy; Eric Bridgeford; Dean M. Kleissas; Daniel J. Tward; Ailey K. Crow; Brian Hsueh; Matthew Wright; Michael I. Miller; Stephen J. Smith; R. Jacob Vogelstein; Karl Deisseroth

Big imaging data is becoming more prominent in brain sciences across spatiotemporal scales and phylogenies. We have developed a computational ecosystem that enables storage, visualization, and analysis of these data in the cloud. Thus far it spans more than 20 publications and more than 100 terabytes, including nanoscale ultrastructure, microscale synaptogenetic diversity, and mesoscale whole-brain connectivity, making NeuroData the largest and most diverse open repository of brain data.


Neuron | 2011

Dysregulation of limbic and auditory networks in tinnitus

Amber M. Leaver; Laurent Renier; Mark A. Chevillet; Susan Morgan; Hung J. Kim; Josef P. Rauschecker


British Machine Vision Conference | 2015

VESICLE: Volumetric Evaluation of Synaptic Interfaces using Computer Vision at Large Scale

William Gray Roncal; Michael Pekala; Verena Kaynig-Fittkau; Dean M. Kleissas; Joshua T. Vogelstein; Hanspeter Pfister; Randal C. Burns; R. Jacob Vogelstein; Mark A. Chevillet; Gregory D. Hager


arXiv: Computation and Language | 2016

Evaluating semantic models with word-sentence relatedness

Kimberly Glasgow; Matthew J. Roos; Amy J. Haufler; Mark A. Chevillet; Michael Wolmetz


Neuron | 2018

Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes

Xiong Jiang; Mark A. Chevillet; Josef P. Rauschecker; Maximilian Riesenhuber


arXiv: Quantitative Methods | 2017

Neural Reconstruction Integrity: A metric for assessing the connectivity of reconstructed neural networks

Elizabeth P. Reilly; Jeffrey S. Garretson; William Gray Roncal; Dean M. Kleissas; Brock A. Wester; Mark A. Chevillet; Matthew J. Roos

Collaboration


Dive into Mark A. Chevillet's collaborations.

Top Co-Authors

Josef P. Rauschecker

Georgetown University Medical Center

Maximilian Riesenhuber

Georgetown University Medical Center
