
Publication


Featured research published by Kei Majima.


NeuroImage | 2014

Decoding visual object categories from temporal correlations of ECoG signals

Kei Majima; Takeshi Matsuo; Keisuke Kawasaki; Kensuke Kawai; Nobuhito Saito; Isao Hasegawa; Yukiyasu Kamitani

How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While power or phase alone yielded decoding accuracies significantly better than chance, correlations alone, or correlations combined with power, outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories.
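The feature construction at the heart of this study, pairwise temporal correlations between electrodes computed within each trial, is easy to sketch. Below is a minimal illustration on synthetic data, not the paper's actual pipeline; the array shapes, the logistic-regression classifier, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic ECoG-like data: trials x electrodes x time points.
n_trials, n_electrodes, n_times = 200, 16, 300
X_raw = rng.standard_normal((n_trials, n_electrodes, n_times))
y = rng.integers(0, 2, n_trials)  # two object categories

# Inject a category-dependent correlation between two electrodes
# so there is something to decode.
shared = rng.standard_normal((n_trials, n_times))
X_raw[y == 1, 0] += shared[y == 1]
X_raw[y == 1, 1] += shared[y == 1]

def correlation_features(trials):
    """Upper-triangular electrode-by-electrode correlations per trial."""
    n = trials.shape[1]
    iu = np.triu_indices(n, k=1)
    feats = np.empty((trials.shape[0], len(iu[0])))
    for i, trial in enumerate(trials):
        feats[i] = np.corrcoef(trial)[iu]
    return feats

X = correlation_features(X_raw)
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Shuffling trial order within each electrode, as the control analysis in the paper does, would destroy the within-trial correlation structure that these features rely on.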


Nature Communications | 2016

Associative-memory representations emerge as shared spatial patterns of theta activity spanning the primate temporal cortex

Kiyoshi Nakahara; Ken Adachi; Keisuke Kawasaki; Takeshi Matsuo; Hirohito Sawahata; Kei Majima; Masaki Takeda; Sayaka Sugiyama; Ryota Nakata; Atsuhiko Iijima; Hisashi Tanigawa; Takafumi Suzuki; Yukiyasu Kamitani; Isao Hasegawa

Highly localized neuronal spikes in primate temporal cortex can encode associative memory; however, whether memory formation involves area-wide reorganization of ensemble activity, which often accompanies rhythmicity, or just local microcircuit-level plasticity, remains elusive. Using high-density electrocorticography, we capture local-field potentials spanning the monkey temporal lobes, and show that the visual pair-association (PA) memory is encoded in spatial patterns of theta activity in areas TE, 36, and, partially, in the parahippocampal cortex, but not in the entorhinal cortex. The theta patterns elicited by learned paired associates are distinct between pairs, but similar within pairs. This pattern similarity, emerging through novel PA learning, allows a machine-learning decoder trained on theta patterns elicited by a particular visual item to correctly predict the identity of those elicited by its paired associate. Our results suggest that the formation and sharing of widespread cortical theta patterns via learning-induced reorganization are involved in the mechanisms of associative memory representation.
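The cross-decoding logic described here, training a decoder on theta-band spatial patterns elicited by one item of a learned pair and testing it on patterns elicited by the paired associate, can be sketched as follows. Everything in the sketch (pattern dimensionality, the shared "pair pattern" generative assumption, the linear SVM) is invented for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Synthetic theta-power spatial patterns: one vector per trial over
# n_channels electrodes. Each learned pair has a shared "pair
# pattern" (pattern similarity within a pair), plus trial noise.
n_pairs, n_trials_per_item, n_channels = 4, 30, 64
pair_patterns = rng.standard_normal((n_pairs, n_channels))

def make_trials(noise=1.0):
    X, y = [], []
    for p in range(n_pairs):
        X.append(pair_patterns[p]
                 + noise * rng.standard_normal((n_trials_per_item, n_channels)))
        y += [p] * n_trials_per_item
    return np.vstack(X), np.array(y)

# Train on patterns elicited by item A of each pair ...
X_a, y_a = make_trials()
# ... and test on patterns elicited by the paired associate (item B),
# an independent set of trials sharing only the pair pattern.
X_b, y_b = make_trials()

clf = LinearSVC().fit(X_a, y_a)
print("cross-item decoding accuracy:", clf.score(X_b, y_b))
```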


bioRxiv | 2017

Deep image reconstruction from human brain activity

Guohua Shen; Tomoyasu Horikawa; Kei Majima; Yukiyasu Kamitani

Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, it has been limited to the reconstruction with low-level image bases (Miyawaki et al., 2008; Wen et al., 2016) or to the matching to exemplars (Naselaris et al., 2009; Nishimoto et al., 2011). Recent work showed that visual cortical activity can be decoded (translated) into hierarchical features of a deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features (Horikawa & Kamitani, 2017). Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. While our model was solely trained with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed ‘reconstructs’ or ‘generates’ images from brain activity, not simply matches to exemplars. A natural image prior introduced by another deep neural network effectively rendered semantically meaningful details to reconstructions by constraining reconstructed images to be similar to natural images. Furthermore, human judgment of reconstructions suggests the effectiveness of combining multiple DNN layers to enhance visual quality of generated images. The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
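The optimization idea, adjusting pixel values so that an image's DNN features match target features, can be illustrated in a few lines. The sketch below uses a small untrained CNN as a stand-in for the pretrained multi-layer DNN, and the features of a hidden "true" image as a stand-in for features decoded from brain activity; it is not the authors' implementation, which matched features at multiple layers and added a natural image prior.

```python
import torch

torch.manual_seed(0)

# Stand-in feature extractor: a small untrained CNN. The actual
# method uses a pretrained DNN and matches features at multiple
# layers; one layer keeps the sketch short.
features = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 5), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 16, 5), torch.nn.ReLU(),
)
for p in features.parameters():
    p.requires_grad_(False)

# Stand-in for features decoded from brain activity: here, the
# features of a hidden "true" image we try to recover.
true_image = torch.rand(1, 3, 32, 32)
target_feat = features(true_image)

# Optimize pixel values so the image's features match the target.
image = torch.full((1, 3, 32, 32), 0.5, requires_grad=True)
opt = torch.optim.Adam([image], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(features(image), target_feat)
    loss.backward()
    opt.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixels in a valid range

print("final feature loss:", float(loss))
```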


bioRxiv | 2016

Preserved position information in high-level visual cortex with large receptive fields

Kei Majima; Paul Sukhanov; Tomoyasu Horikawa; Yukiyasu Kamitani

Neurons in high-level visual areas respond to more complex visual features with broader receptive fields (RFs) compared to those in low-level visual areas. Thus, high-level visual areas are generally considered to carry less information regarding the position of seen objects in the visual field. However, larger RFs may not imply loss of position information at the population level. Here, we evaluated how accurately the position of a seen object could be predicted (decoded) from activity patterns in each of six representative visual areas with different RF sizes (V1–V4, LOC, and FFA). We collected fMRI responses while human subjects viewed a ball randomly moving in a two-dimensional field. To estimate population RF sizes of individual fMRI voxels, RF models were fitted for individual voxels in each brain area. The voxels in higher visual areas showed larger estimated RFs than those in lower visual areas. Then, the ball's position in a separate session was predicted by maximum likelihood estimation using the RF models of individual voxels. We also tested a model-free multivoxel regression (support vector regression, SVR) to predict the position. We found that regardless of the difference in RF size, all visual areas showed similar prediction accuracies, especially on the horizontal dimension. Higher areas showed slightly lower accuracies on the vertical dimension, which appears to be attributable to the narrower spatial distributions of the RF centers. The results suggest that much of the position information is preserved in population activity through the hierarchical visual pathway regardless of RF sizes, and is potentially available in later processing for recognition and behavior.

Significance statement: High-level ventral visual areas are thought to achieve position invariance with larger receptive fields at the cost of precise position information. However, larger receptive fields may not imply loss of position information at the population level. Here, multivoxel fMRI decoding reveals that high-level visual areas predict an object's position with accuracies similar to those of low-level visual areas, especially on the horizontal dimension, preserving information potentially available for later processing.
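The maximum likelihood step described above can be sketched as follows, assuming an isotropic 2-D Gaussian receptive field model per voxel and Gaussian response noise (under which the ML position minimizes squared prediction error). The RF centers, sizes, noise level, and grid search are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed voxel model: the mean response to a ball at position (x, y)
# is a 2-D isotropic Gaussian of that position (a population
# receptive field); the centers and sizes below are invented.
n_voxels = 100
centers = rng.uniform(-5, 5, (n_voxels, 2))  # RF centers (deg)
sizes = rng.uniform(0.5, 3.0, n_voxels)      # RF sizes (deg)
noise_sd = 0.3

def mean_response(pos):
    d2 = ((centers - pos) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sizes ** 2))

# Simulate one fMRI volume in response to a ball at a true position.
true_pos = np.array([1.5, -2.0])
r = mean_response(true_pos) + noise_sd * rng.standard_normal(n_voxels)

# Maximum likelihood decoding: with isotropic Gaussian noise, the ML
# position minimizes the squared error between observed and
# model-predicted responses; search a grid of candidate positions.
grid = np.linspace(-5, 5, 101)
candidates = np.array([(x, y) for x in grid for y in grid])
errs = [((r - mean_response(c)) ** 2).sum() for c in candidates]
decoded = candidates[int(np.argmin(errs))]
print("true position:", true_pos, "decoded position:", decoded)
```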


Frontiers in Neuroinformatics | 2016

BrainLiner: A Neuroinformatics Platform for Sharing Time-Aligned Brain-Behavior Data

Makoto Takemiya; Kei Majima; Mitsuaki Tsukamoto; Yukiyasu Kamitani

Data-driven neuroscience aims to find statistical relationships between brain activity and task behavior from large-scale datasets. To facilitate high-throughput data processing and modeling, we created BrainLiner as a web platform for sharing time-aligned, brain-behavior data. Using an HDF5-based data format, BrainLiner treats brain activity and data related to behavior with the same salience, aligning both behavioral and brain activity data on a common time axis. This facilitates learning the relationship between behavior and brain activity. Using a common data file format also simplifies data processing and analyses. Properties describing data are unambiguously defined using a schema, allowing machine-readable definition of data. The BrainLiner platform allows users to upload and download data, as well as to explore and search for data from the web platform. A WebGL-based data explorer can visualize highly detailed neurophysiological data from within the web browser, and a data-driven search feature allows users to search for similar time windows of data. This increases transparency, and allows for visual inspection of neural coding. BrainLiner thus provides an essential set of tools for data sharing and data-driven modeling.
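As a rough illustration of the time-alignment idea (not BrainLiner's actual HDF5 schema; the group and dataset names here are hypothetical), brain and behavior arrays can each carry timestamps on a shared clock so that the two streams can be aligned sample-by-sample:

```python
import h5py
import numpy as np

# Hypothetical layout: neural and behavioral data stored with
# explicit timestamps on a shared clock.
t_brain = np.arange(0, 10, 0.001)            # 1 kHz neural clock
t_behav = np.arange(0, 10, 0.1)              # 10 Hz behavior clock
brain = np.random.randn(t_brain.size, 64)    # 64 channels
behavior = np.random.randn(t_behav.size, 2)  # e.g., gaze x/y

with h5py.File("session.h5", "w") as f:
    g = f.create_group("brain")
    g.create_dataset("data", data=brain)
    g.create_dataset("time", data=t_brain)
    g.attrs["sampling_rate_hz"] = 1000.0

    g = f.create_group("behavior")
    g.create_dataset("data", data=behavior)
    g.create_dataset("time", data=t_behav)
    g.attrs["sampling_rate_hz"] = 10.0

# Alignment: for each behavior sample, find the nearest-following
# neural sample on the common time axis.
with h5py.File("session.h5", "r") as f:
    tb = f["brain/time"][:]
    idx = np.searchsorted(tb, f["behavior/time"][:])
    print("neural samples aligned to behavior:", idx[:5])
```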


bioRxiv | 2018

End-to-end deep image reconstruction from human brain activity

Guohua Shen; Kshitij Dwivedi; Kei Majima; Tomoyasu Horikawa; Yukiyasu Kamitani

Deep neural networks (DNNs) have recently been applied successfully to brain decoding and image reconstruction from functional magnetic resonance imaging (fMRI) activity. However, direct training of a DNN with fMRI data is often avoided because the size of available data is thought to be insufficient to train a complex network with numerous parameters. Instead, a pre-trained DNN has served as a proxy for hierarchical visual representations, and fMRI data were used to decode individual DNN features of a stimulus image using a simple linear model, which were then passed to a reconstruction module. Here, we present our attempt to directly train a DNN model with fMRI data and the corresponding stimulus images to build an end-to-end reconstruction model. We trained a generative adversarial network with an additional loss term defined in a high-level feature space (feature loss) using up to 6,000 training data points (natural images and the fMRI responses). The trained deep generator network was tested on an independent dataset, directly producing a reconstructed image given an fMRI pattern as the input. The reconstructions obtained from the proposed method showed resemblance with both natural and artificial test stimuli. The accuracy increased as a function of the training data size, though not outperforming the decoded feature-based method with the available data size. Ablation analyses indicated that the feature loss played a critical role to achieve accurate reconstruction. Our results suggest a potential for the end-to-end framework to learn a direct mapping between brain activity and perception given even larger datasets.
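The composite objective described here, an adversarial term plus a loss defined in a high-level feature space, can be shown schematically. The networks below are trivial untrained stand-ins and the loss weights are invented; this sketches the structure of the generator loss, not the paper's architecture or training procedure.

```python
import torch

torch.manual_seed(0)

# Trivial stand-ins: a generator mapping an fMRI pattern to an image,
# a discriminator, and a "high-level" feature extractor.
n_voxels = 500
G = torch.nn.Sequential(torch.nn.Linear(n_voxels, 3 * 16 * 16), torch.nn.Sigmoid())
D = torch.nn.Sequential(torch.nn.Linear(3 * 16 * 16, 1))
F = torch.nn.Sequential(torch.nn.Linear(3 * 16 * 16, 64))

fmri = torch.randn(8, n_voxels)    # one batch of fMRI patterns
real = torch.rand(8, 3 * 16 * 16)  # corresponding stimulus images
fake = G(fmri)

# Generator loss: adversarial term plus pixel- and feature-space
# reconstruction terms; the relative weights are invented.
bce = torch.nn.functional.binary_cross_entropy_with_logits
adv_loss = bce(D(fake), torch.ones(8, 1))  # fool the discriminator
pix_loss = torch.nn.functional.mse_loss(fake, real)
feat_loss = torch.nn.functional.mse_loss(F(fake), F(real))
g_loss = adv_loss + 1.0 * pix_loss + 10.0 * feat_loss
g_loss.backward()  # one generator update step would follow
print(float(adv_loss), float(pix_loss), float(feat_loss))
```

The ablation result in the abstract corresponds to setting the feat_loss weight to zero, which degraded reconstruction accuracy.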


Frontiers in Neuroinformatics | 2018

Sparse Ordinal Logistic Regression and Its Application to Brain Decoding

Emi Satake; Kei Majima; Shuntaro Aoki; Yukiyasu Kamitani

Brain decoding with multivariate classification and regression has provided a powerful framework for characterizing information encoded in population neural activity. Classification and regression models are respectively used to predict discrete and continuous variables of interest. However, cognitive and behavioral parameters that we wish to decode are often ordinal variables whose values are discrete but ordered, such as subjective ratings. To date, there is no established method of predicting ordinal variables in brain decoding. In this study, we present a new algorithm, sparse ordinal logistic regression (SOLR), that combines ordinal logistic regression with Bayesian sparse weight estimation. We found that, in both simulation and analyses using real functional magnetic resonance imaging (fMRI) data, SOLR outperformed ordinal logistic regression with non-sparse regularization, indicating that sparseness leads to better decoding performance. SOLR also outperformed classification and linear regression models with the same type of sparseness, indicating the advantage of the modeling tailored to ordinal outputs. Our results suggest that SOLR provides a principled and effective method of decoding ordinal variables.
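The cumulative-logit likelihood underlying ordinal logistic regression can be written down compactly. The sketch below fits it by penalized maximum likelihood with an L1 term as a simple stand-in for SOLR's Bayesian sparse (automatic relevance determination) prior, so it is not the SOLR algorithm itself; the data and hyperparameters are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)

# Synthetic data: sparse true weights, 4 ordered rating levels.
n, d, K = 300, 50, 4
w_true = np.zeros(d)
w_true[:3] = [2.0, -2.0, 1.5]
X = rng.standard_normal((n, d))
z = X @ w_true + rng.logistic(size=n)
y = np.digitize(z, [-2.0, 0.0, 2.0])  # labels 0..3

def unpack(params):
    w, a = params[:d], params[d:]
    # Cumulative parametrization keeps the K-1 thresholds ordered.
    theta = np.concatenate([[a[0]], a[0] + np.cumsum(np.exp(a[1:]))])
    return w, theta

def neg_log_lik(params, lam=1.0):
    w, theta = unpack(params)
    cdf = expit(theta[None, :] - (X @ w)[:, None])       # P(y <= k)
    cdf = np.hstack([np.zeros((n, 1)), cdf, np.ones((n, 1))])
    p = cdf[np.arange(n), y + 1] - cdf[np.arange(n), y]  # P(y = k)
    # L1 penalty as a stand-in for SOLR's Bayesian sparse prior.
    return -np.log(np.clip(p, 1e-12, None)).sum() + lam * np.abs(w).sum()

x0 = np.concatenate([np.zeros(d), [-1.0, 0.0, 0.0]])
res = minimize(neg_log_lik, x0, method="L-BFGS-B")
w_hat, _ = unpack(res.x)
print("largest |weights|:", np.argsort(-np.abs(w_hat))[:5])
```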


Cerebral Cortex | 2018

Heterogeneous Redistribution of Facial Subcategory Information Within and Outside the Face-Selective Domain in Primate Inferior Temporal Cortex

Naohisa Miyakawa; Kei Majima; Hirohito Sawahata; Keisuke Kawasaki; Takeshi Matsuo; Naoki Kotake; Takafumi Suzuki; Yukiyasu Kamitani; Isao Hasegawa

The inferior temporal cortex (ITC) contains neurons selective to multiple levels of visual categories. However, the mechanisms by which these neurons collectively construct hierarchical category percepts remain unclear. By comparing decoding accuracy with simultaneously acquired electrocorticogram (ECoG), local field potentials (LFPs), and multi-unit activity in the macaque ITC, we show that low-frequency LFPs/ECoG in the early evoked visual response phase contain sufficient coarse category (e.g., face) information, which is homogeneous and enhanced by spatial summation of up to several millimeters. Late-induced high-frequency LFPs additionally carry spike-coupled finer category (e.g., species, view, and identity of the face) information, which is heterogeneous and reduced by spatial summation. Face-encoding neural activity forms a cluster in similar cortical locations regardless of whether it is defined by early evoked low-frequency signals or late-induced high-gamma signals. By contrast, facial subcategory-encoding activity is distributed, not confined to the face cluster, and dynamically increases its heterogeneity from the early evoked to late-induced phases. These findings support a view that, in contrast to the homogeneous and static coarse category-encoding neural cluster, finer category-encoding clusters are heterogeneously distributed even outside their parent category cluster and dynamically increase heterogeneity along with the local cortical processing in the ITC.
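A generic version of the feature extraction behind such band-wise decoding comparisons, band-limited power per trial fed to a linear decoder, is sketched below on synthetic single-channel data. The frequency bands, filter order, and the planted high-gamma effect are all assumptions, and the actual analyses used multi-site recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
fs = 1000  # sampling rate (Hz)

# Synthetic single-channel trials from two stimulus categories,
# planted to differ in high-gamma (80-150 Hz) power only.
n_trials, n_times = 120, 500
y = rng.integers(0, 2, n_trials)
X_raw = rng.standard_normal((n_trials, n_times))
b, a = butter(4, [80 / (fs / 2), 150 / (fs / 2)], btype="band")
hg = filtfilt(b, a, rng.standard_normal((n_trials, n_times)), axis=1)
X_raw += (1 + 2 * y)[:, None] * hg  # category 1 gets more high-gamma

def band_power(x, lo, hi):
    """Log band-limited power per trial in the [lo, hi] Hz band."""
    bb, aa = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filt = filtfilt(bb, aa, x, axis=1)
    return np.log((filt ** 2).mean(axis=1, keepdims=True))

bands = {"low (1-20 Hz)": (1, 20), "high-gamma (80-150 Hz)": (80, 150)}
for name, (lo, hi) in bands.items():
    acc = cross_val_score(LinearDiscriminantAnalysis(),
                          band_power(X_raw, lo, hi), y, cv=5).mean()
    print(f"{name}: decoding accuracy {acc:.2f}")
```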


bioRxiv | 2017

Position Information Encoded by Population Activity in Hierarchical Visual Areas

Kei Majima; Paul Sukhanov; Tomoyasu Horikawa; Yukiyasu Kamitani

Neurons in high-level visual areas respond to more complex visual features with broader receptive fields (RFs) compared to those in low-level visual areas. Thus, high-level visual areas are generally considered to carry less information regarding the position of seen objects in the visual field. However, larger RFs may not imply loss of position information at the population level. Here, we evaluated how accurately the position of a seen object could be predicted (decoded) from activity patterns in each of six representative visual areas with different RF sizes [V1–V4, lateral occipital complex (LOC), and fusiform face area (FFA)]. We collected functional magnetic resonance imaging (fMRI) responses while human subjects viewed a ball randomly moving in a two-dimensional field. To estimate population RF sizes of individual fMRI voxels, RF models were fitted for individual voxels in each brain area. The voxels in higher visual areas showed larger estimated RFs than those in lower visual areas. Then, the ball’s position in a separate session was predicted by maximum likelihood estimation using the RF models of individual voxels. We also tested a model-free multivoxel regression (support vector regression, SVR) to predict the position. We found that regardless of the difference in RF size, all visual areas showed similar prediction accuracies, especially on the horizontal dimension. Higher areas showed slightly lower accuracies on the vertical dimension, which appears to be attributable to the narrower spatial distributions of the RF centers. The results suggest that much of the position information is preserved in population activity through the hierarchical visual pathway regardless of RF sizes and is potentially available in later processing for recognition and behavior.
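The model-free decoder mentioned here, support vector regression from multivoxel patterns to stimulus position, might look roughly like the following; the synthetic voxel responses, the linear kernel, and the one-SVR-per-coordinate setup are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# Synthetic experiment: positions of a moving ball and noisy voxel
# responses generated from Gaussian receptive fields.
n_samples, n_voxels = 400, 80
pos = rng.uniform(-5, 5, (n_samples, 2))  # (x, y) per fMRI volume
centers = rng.uniform(-5, 5, (n_voxels, 2))
sizes = rng.uniform(0.5, 3.0, n_voxels)
d2 = ((pos[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
X = np.exp(-d2 / (2 * sizes ** 2)) + 0.3 * rng.standard_normal((n_samples, n_voxels))

# Model-free decoding: one support vector regression per coordinate.
for dim, label in enumerate(["horizontal", "vertical"]):
    pred = cross_val_predict(SVR(kernel="linear"), X, pos[:, dim], cv=5)
    r = np.corrcoef(pred, pos[:, dim])[0, 1]
    print(f"{label}: decoded-vs-true correlation {r:.2f}")
```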


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

Experimental comparison of classification methods for key kinase identification for neurite elongation

Yuji Yoshida; Kei Majima; Tatsuya Yamada; Yuki Maruno; Yuichi Sakumura; Kazushi Ikeda

Kinases in a developing neuron play important roles in elongating a neurite through their complex interactions. To elucidate the effect of each kinase on neurite elongation and regeneration from a small set of experiments, we applied machine learning methods to synthetic datasets based on a biologically feasible model. The results showed that the ridged partial least squares (RPLS) algorithm performed better than other standard algorithms such as the naive Bayes classifier, support vector machines, and random forest classification. This suggests the effectiveness of the dimension reduction performed in RPLS.
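Since "ridged partial least squares" has no standard library implementation, the sketch below substitutes a PLS dimension-reduction step followed by an L2-regularized (ridge-penalized) logistic classifier as a loose stand-in, and compares it against the baselines named in the abstract on synthetic small-sample, many-feature data.

```python
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Small-sample, many-feature setting, loosely mimicking "few
# experiments, many kinases"; the data are synthetic.
X, y = make_classification(n_samples=60, n_features=200,
                           n_informative=5, random_state=0)

models = {
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    # Stand-in for RPLS: PLS dimension reduction followed by an
    # L2-penalized logistic classifier, not the paper's algorithm.
    "PLS + ridge-penalized classifier": make_pipeline(
        PLSRegression(n_components=5), LogisticRegression()
    ),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```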

Collaboration


Dive into Kei Majima's collaborations.

Top Co-Authors

Yukiyasu Kamitani
Nara Institute of Science and Technology

Hirohito Sawahata
Toyohashi University of Technology

Kensuke Kawai
Jichi Medical University