
Publications

Featured research published by Ayan Sengupta.


GigaScience | 2016

2015 Brainhack Proceedings

R. Cameron Craddock; Pierre Bellec; Daniel S. Margulies; B. Nolan Nichols; Jörg P. Pfannmöller; AmanPreet Badhwar; David N. Kennedy; Jean-Baptiste Poline; Roberto Toro; Ben Cipollini; Ariel Rokem; Daniel Clark; Krzysztof J. Gorgolewski; Daniel J. Clark; Samir Das; Cécile Madjar; Ayan Sengupta; Zia Mohades; Sebastien Dery; Weiran Deng; Eric Earl; Damion V. Demeter; Kate Mills; Glad Mihai; Luka Ruzic; Nick Ketz; Andrew Reineberg; Marianne C. Reddan; Anne-Lise Goddings; Javier Gonzalez-Castillo

Table of contents

- I1 Introduction to the 2015 Brainhack Proceedings (R. Cameron Craddock, Pierre Bellec, Daniel S. Margulies, B. Nolan Nichols, Jörg P. Pfannmöller)
- A1 Distributed collaboration: the case for the enhancement of Brainspell's interface (AmanPreet Badhwar, David Kennedy, Jean-Baptiste Poline, Roberto Toro)
- A2 Advancing open science through NiData (Ben Cipollini, Ariel Rokem)
- A3 Integrating the Brain Imaging Data Structure (BIDS) standard into C-PAC (Daniel Clark, Krzysztof J. Gorgolewski, R. Cameron Craddock)
- A4 Optimized implementations of voxel-wise degree centrality and local functional connectivity density mapping in AFNI (R. Cameron Craddock, Daniel J. Clark)
- A5 LORIS: DICOM anonymizer (Samir Das, Cécile Madjar, Ayan Sengupta, Zia Mohades)
- A6 Automatic extraction of academic collaborations in neuroimaging (Sebastien Dery)
- A7 NiftyView: a zero-footprint web application for viewing DICOM and NIfTI files (Weiran Deng)
- A8 Human Connectome Project Minimal Preprocessing Pipelines to Nipype (Eric Earl, Damion V. Demeter, Kate Mills, Glad Mihai, Luka Ruzic, Nick Ketz, Andrew Reineberg, Marianne C. Reddan, Anne-Lise Goddings, Javier Gonzalez-Castillo, Krzysztof J. Gorgolewski)
- A9 Generating music with resting-state fMRI data (Caroline Froehlich, Gil Dekel, Daniel S. Margulies, R. Cameron Craddock)
- A10 Highly comparable time-series analysis in Nitime (Ben D. Fulcher)
- A11 Nipype interfaces in CBRAIN (Tristan Glatard, Samir Das, Reza Adalat, Natacha Beck, Rémi Bernard, Najmeh Khalili-Mahani, Pierre Rioux, Marc-Étienne Rousseau, Alan C. Evans)
- A12 DueCredit: automated collection of citations for software, methods, and data (Yaroslav O. Halchenko, Matteo Visconti di Oleggio Castello)
- A13 Open source low-cost device to register dog's heart rate and tail movement (Raúl Hernández-Pérez, Edgar A. Morales, Laura V. Cuaya)
- A14 Calculating the Laterality Index Using FSL for Stroke Neuroimaging Data (Kaori L. Ito, Sook-Lei Liew)
- A15 Wrapping FreeSurfer 6 for use in high-performance computing environments (Hans J. Johnson)
- A16 Facilitating big data meta-analyses for clinical neuroimaging through ENIGMA wrapper scripts (Erik Kan, Julia Anglin, Michael Borich, Neda Jahanshad, Paul Thompson, Sook-Lei Liew)
- A17 A cortical surface-based geodesic distance package for Python (Daniel S. Margulies, Marcel Falkiewicz, Julia M. Huntenburg)
- A18 Sharing data in the cloud (David O'Connor, Daniel J. Clark, Michael P. Milham, R. Cameron Craddock)
- A19 Detecting task-based fMRI compliance using plan abandonment techniques (Ramon Fraga Pereira, Anibal Sólon Heinsfeld, Alexandre Rosa Franco, Augusto Buchweitz, Felipe Meneguzzi)
- A20 Self-organization and brain function (Jörg P. Pfannmöller, Rickson Mesquita, Luis C.T. Herrera, Daniela Dentico)
- A21 The Neuroimaging Data Model (NIDM) API (Vanessa Sochat, B. Nolan Nichols)
- A22 NeuroView: a customizable browser-based utility (Anibal Sólon Heinsfeld, Alexandre Rosa Franco, Augusto Buchweitz, Felipe Meneguzzi)
- A23 DIPY: Brain tissue classification (Julio E. Villalon-Reina, Eleftherios Garyfallidis)


Scientific Data | 2016

A studyforrest extension, retinotopic mapping and localization of higher visual areas

Ayan Sengupta; Falko R. Kaule; J. Swaroop Guntupalli; Michael B. Hoffmann; Christian Häusler; Jörg Stadler; Michael Hanke

The studyforrest (http://studyforrest.org) dataset is likely the largest neuroimaging dataset on natural language and story processing publicly available today. In this article, along with a companion publication, we present an update of this dataset that extends its scope to vision and multi-sensory research. 15 participants of the original cohort volunteered for a series of additional studies: a clinical examination of visual function, a standard retinotopic mapping procedure, and a localization of higher visual areas—such as the fusiform face area. The combination of this update, the previous data releases for the dataset, and the companion publication, which includes neuroimaging and eye tracking data from natural stimulation with a motion picture, form an extremely versatile and comprehensive resource for brain imaging research—with almost six hours of functional neuroimaging data across five different stimulation paradigms for each participant. Furthermore, we describe employed paradigms and present results that document the quality of the data for the purpose of characterising major properties of participants’ visual processing stream.


NeuroImage | 2017

The effect of acquisition resolution on orientation decoding from V1 BOLD fMRI at 7 T

Ayan Sengupta; Renat Yakupov; Oliver Speck; Stefan Pollmann; Michael Hanke

Abstract A decade after it was shown that the orientation of visual grating stimuli can be decoded from human visual cortex activity by means of multivariate pattern classification of BOLD fMRI data, numerous studies have investigated which aspects of neuronal activity are reflected in BOLD response patterns and are accessible for decoding. However, the effect of acquisition resolution on BOLD fMRI decoding analyses remains inconclusive. The present study is the first to provide empirical ultra high-field fMRI data on this topic, recorded at four spatial resolutions (0.8 mm, 1.4 mm, 2 mm, and 3 mm isotropic voxel size), in order to test hypotheses on the strength and spatial scale of orientation-discriminating signals. We present a detailed analysis, in line with predictions from previous simulation studies, of how orientation decoding performance varies across acquisition resolutions. Moreover, we examine different spatial filtering procedures and their effects on orientation decoding. We show that higher-resolution scans with subsequent down-sampling or low-pass filtering yield no benefit in decoding accuracy over scans natively recorded at the corresponding lower resolution. The orientation-related signal in the BOLD fMRI data is spatially broadband in nature, including both high spatial frequency components and the large-scale biases previously proposed in the literature. Moreover, we found an above chance-level contribution from large draining veins to orientation decoding. The acquired raw data were publicly released to facilitate further investigation.
Highlights

- Best decoding performance at 2.0 mm acquisition resolution (four tested resolutions: 0.8, 1.4, 2.0, and 3.0 mm isotropic).
- Recording high-resolution data with subsequent spatial downsampling does not improve decoding performance.
- The most informative spatial frequency band covers ~4.5 mm to 1.6 cm wavelength.
- Veins carry orientation-specific signals and contribute to decoding; a potential source of complex spatial signal aliasing.
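The multivariate pattern classification approach referenced in this abstract can be illustrated with a minimal sketch: a cross-validated linear classifier predicts a binary stimulus "orientation" label from simulated voxel response patterns. This is not the authors' pipeline; the data, classifier choice, and all parameters here are assumptions for demonstration only.

```python
# Illustrative sketch (not the study's actual pipeline): decoding a binary
# stimulus label from simulated voxel patterns with a linear classifier.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# Each voxel carries a weak, fixed orientation preference; trial-wise noise
# dominates any single voxel, so decoding must pool information across voxels.
pref = rng.normal(0, 0.3, n_voxels)            # per-voxel orientation bias
labels = np.repeat([0, 1], n_trials // 2)      # two grating orientations
signs = np.where(labels == 0, -1.0, 1.0)
patterns = signs[:, None] * pref[None, :] + rng.normal(0, 1.0, (n_trials, n_voxels))

# 8-fold cross-validated decoding accuracy (chance level = 0.5)
acc = cross_val_score(LinearSVC(), patterns, labels, cv=8).mean()
print(f"mean decoding accuracy: {acc:.2f}")
```

Because the simulated signal is distributed across many weakly informative voxels, the classifier performs well above the 0.5 chance level even though no single voxel is reliable on its own.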


bioRxiv | 2016

Simultaneous fMRI and eye gaze recordings during prolonged natural stimulation -- a studyforrest extension

Michael Hanke; Nico Adelhöfer; Daniel Kottke; Vittorio Iacovella; Ayan Sengupta; Falko R. Kaule; Roland Nigbur; Alexander Q. Waite; Florian Baumgartner; Jörg Stadler

Here we present an update of the studyforrest (http://studyforrest.org) dataset that complements the previously released functional magnetic resonance imaging (fMRI) data for natural language processing with a new two-hour 3 Tesla fMRI acquisition while 15 of the original participants were shown an audio-visual version of the stimulus motion picture. We demonstrate with two validation analyses that these new data support modeling specific properties of the complex natural stimulus, as well as a substantial within-subject BOLD response congruency in brain areas related to the processing of auditory inputs, speech, and narrative when compared to the existing fMRI data for audio-only stimulation. In addition, we provide participants’ eye gaze location as recorded simultaneously with fMRI, and an additional sample of 15 control participants whose eye gaze trajectories for the entire movie were recorded in a lab setting — to enable studies on attentional processes and comparative investigations on the potential impact of the stimulation setting on these processes.


bioRxiv | 2018

Imaging human cortical responses to intraneural microstimulation using magnetoencephalography

George C. O'Neill; Roger H Watkins; Rochelle Ackerley; Eleanor L. Barratt; Ayan Sengupta; Michael Asghar; Rosa Sanchez; Matthew J. Brookes; Paul Glover; Johan Wessberg

The sensation of touch in the glabrous skin of the human hand is conveyed by thousands of fast-conducting mechanoreceptive afferents, which can be categorised into four distinct types. The spiking properties of these afferents in the periphery in response to varied tactile stimuli are well-characterised, but relatively little is known about the spatiotemporal properties of the neural representations of these different receptor types in the human cortex. Here, we use the novel methodological combination of single-unit intraneural microstimulation (INMS) with magnetoencephalography (MEG) to localise cortical representations of individual touch afferents in humans, by measuring the extracranial magnetic fields from neural currents. We found that by assessing the modulation of the beta (13-30 Hz) rhythm during single-unit INMS, significant changes in oscillatory amplitude occur in the contralateral primary somatosensory cortex within and across a group of fast adapting type I mechanoreceptive afferents, which corresponded well to the induced response from matched vibrotactile stimulation. Combining the spatiotemporal specificity of MEG with the selective single-unit stimulation of INMS enables the interrogation of the central representations of different aspects of tactile afferent signalling within the human cortices. The fundamental finding that single-unit INMS ERD responses are robust and consistent with natural somatosensory stimuli will permit us to more dynamically probe the central nervous system responses in humans, to address questions about the processing of touch from the different classes of mechanoreceptive afferents and the effects of varying the stimulus frequency and patterning.
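The beta-band modulation analysis mentioned above can be sketched in its simplest form: band-pass a sensor trace to 13-30 Hz and track its amplitude envelope over time. The sampling rate, filter order, and the synthetic "sensor" signal below are assumptions for illustration, not details taken from the study.

```python
# Hedged illustration: beta (13-30 Hz) amplitude envelope via band-pass
# filtering plus the Hilbert transform, on a synthetic trace in which a
# 20 Hz burst is suppressed after t = 1 s (mimicking event-related
# desynchronisation, ERD).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)

beta = np.where(t < 1.0, 1.0, 0.2) * np.sin(2 * np.pi * 20 * t)
trace = beta + 0.1 * np.random.default_rng(1).normal(size=t.size)

# Band-pass 13-30 Hz (4th-order Butterworth, zero-phase), then take the
# magnitude of the analytic signal as the amplitude envelope.
b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, trace)))

early = envelope[t < 1.0].mean()
late = envelope[t >= 1.0].mean()
print(f"beta envelope: early={early:.2f}, late={late:.2f}")
```

The drop in envelope amplitude between the two time windows is the kind of oscillatory amplitude change the study quantifies, here reproduced on toy data.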


bioRxiv | 2018

The effect of acquisition resolution on orientation decoding from V1: comparison of 3T and 7T

Ayan Sengupta; Oliver Speck; Renat Yakupov; Martin Kanowski; Claus Tempelmann; Stefan Pollmann; Michael Hanke

Previously published results indicate that the accuracy of decoding visual orientation from 7 Tesla fMRI data of V1 peaks at spatial acquisition resolutions that are routinely accessible with more conventional 3 Tesla scanners. This study directly compares the decoding performance between a 3 Tesla and a 7 Tesla dataset that were acquired using the same stimulation paradigm by applying an identical analysis procedure. The results indicate that decoding models built on 3 Tesla data are comparatively impaired. Moreover, we found no evidence for a strong coupling of BOLD signal change magnitude or temporal signal to noise ratio (tSNR) with decoding performance. Direct enhancement of tSNR via multiband fMRI acquisition at the same resolution did not translate into improved decoding performance. Additional voxel selection can boost 3 Tesla decoding performance to the 7 Tesla level only at a 3 mm acquisition resolution. In both datasets the BOLD signal available for orientation decoding is spatially broadband, but, consistent with the size of the BOLD point-spread-function, decoding models at 3 Tesla utilize spatially coarser image components.


F1000Research | 2018

Spatial band-pass filtering aids decoding musical genres from auditory cortex 7T fMRI

Ayan Sengupta; Stefan Pollmann; Michael Hanke

Spatial filtering strategies, combined with multivariate decoding analysis of BOLD images, have been used to investigate the nature of the neural signal underlying the discriminability of brain activity patterns evoked by sensory stimulation – primarily in the visual cortex. Previous research indicates that such signals are spatially broadband in nature, and are not primarily comprised of fine-grained activation patterns. However, it is unclear whether this is a general property of the BOLD signal, or whether it is specific to the details of employed analyses and stimuli. Here we applied an analysis strategy from a previous study on decoding visual orientation from V1 to publicly available, high-resolution 7T fMRI data on the BOLD response to musical genres in primary auditory cortex. The results show that the pattern of decoding accuracies with respect to different types and levels of spatial filtering is comparable to that obtained from V1, despite considerable differences in the respective cortical circuitry.
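Spatial band-pass filtering of an activation map can be sketched as a difference of Gaussians, one common implementation (the paper's exact filtering procedure is not reproduced here; the image, filter widths, and trend measure are assumptions for this sketch). A fine-grained pattern survives the band-pass while a broad spatial trend is attenuated.

```python
# Sketch of spatial band-pass filtering via a difference of Gaussians.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
fine = rng.normal(size=(64, 64))              # fine-grained "pattern" component
yy, xx = np.mgrid[0:64, 0:64]
coarse = np.sin(xx / 20.0)                    # broad spatial trend
image = fine + 5.0 * coarse

def bandpass(img, sigma_low, sigma_high):
    """Keep spatial scales between the two Gaussian widths (in voxels)."""
    return gaussian_filter(img, sigma_low) - gaussian_filter(img, sigma_high)

filtered = bandpass(image, sigma_low=0.5, sigma_high=4.0)

# The broad trend dominates the unfiltered map but is strongly attenuated
# in the band-passed map, as measured by correlation with the trend.
trend_before = np.abs(np.corrcoef(image.ravel(), coarse.ravel())[0, 1])
trend_after = np.abs(np.corrcoef(filtered.ravel(), coarse.ravel())[0, 1])
print(f"correlation with coarse trend: before={trend_before:.2f}, after={trend_after:.2f}")
```

Running a decoder on maps filtered at different scale bands, as the study does, then reveals which spatial frequencies carry discriminative signal.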


Data in Brief | 2017

Ultra high-field (7 T) multi-resolution fMRI data for orientation decoding in visual cortex

Ayan Sengupta; Renat Yakupov; Oliver Speck; Stefan Pollmann; Michael Hanke

Multivariate pattern classification methods have been successfully applied to decode orientation of visual grating stimuli from BOLD fMRI activity recorded in human visual cortex (Kamitani and Tong, 2005; Haynes and Rees, 2005) [12], [10]. Though there has been extensive research investigating the true spatial scale of the orientation specific signals (Op de Beeck, 2010; Swisher et al., 2010; Alink et al., 2013; Freeman et al., 2011, 2013) [2], [15], [1], [4], [5], it remained inconclusive what spatial acquisition resolution is required, or is optimal, for decoding analyses. The research article entitled “The effect of acquisition resolution on orientation decoding from V1 BOLD fMRI at 7 T” Sengupta et al. (2017) [14] studied the effect of spatial acquisition resolution and also analyzed the strength and spatial scale of orientation discriminating signals. In this article, for the first time, we present empirical ultra high-field fMRI data, obtained as a part of the aforementioned study, which were recorded at four spatial resolutions (0.8 mm, 1.4 mm, 2 mm, and 3 mm isotropic voxel size) for orientation decoding in visual cortex. The dataset is compliant with the BIDS (Brain Imaging Data Structure) format, and freely available from the OpenfMRI portal (dataset accession number ds000113c: http://openfmri.org/dataset/ds000113c).


bioRxiv | 2016

An extension of the studyforrest dataset for vision research

Ayan Sengupta; Falko R. Kaule; J. Swaroop Guntupalli; Michael B. Hoffmann; Christian Häusler; Jörg Stadler; Michael Hanke

The studyforrest (http://studyforrest.org) dataset is likely the largest neuroimaging dataset on natural language and story processing publicly available today. In this article, along with a companion publication, we present an update of this dataset that extends its scope to vision and multi-sensory research. 15 participants of the original cohort volunteered for a series of additional studies: a clinical examination of visual function, a standard retinotopic mapping procedure, and a localization of higher visual areas — such as the fusiform face area. The combination of this update, the previous data releases for the dataset, and the companion publication, which includes neuroimaging and eye tracking data from natural stimulation with a motion picture, form an extremely versatile and comprehensive resource for brain imaging research — with almost six hours of functional neuroimaging data across five different stimulation paradigms for each participant. Furthermore, we describe employed paradigms and present results that document the quality of the data for the purpose of characterising major properties of participants’ visual processing stream.


Scientific Data | 2016

A studyforrest extension, simultaneous fMRI and eye gaze recordings during prolonged natural stimulation

Michael Hanke; Nico Adelhöfer; Daniel Kottke; Vittorio Iacovella; Ayan Sengupta; Falko R. Kaule; Roland Nigbur; Alexander Q. Waite; Florian Baumgartner; Jörg Stadler

Collaboration


Dive into Ayan Sengupta's collaborations.

Top Co-Authors

- Michael Hanke (Otto-von-Guericke University Magdeburg)
- Falko R. Kaule (Otto-von-Guericke University Magdeburg)
- Jörg Stadler (Leibniz Institute for Neurobiology)
- Stefan Pollmann (Otto-von-Guericke University Magdeburg)
- Oliver Speck (Otto-von-Guericke University Magdeburg)
- Renat Yakupov (Otto-von-Guericke University Magdeburg)
- Florian Baumgartner (Otto-von-Guericke University Magdeburg)
- Michael B. Hoffmann (Otto-von-Guericke University Magdeburg)
- Nico Adelhöfer (Otto-von-Guericke University Magdeburg)