Satohiro Tajima
University of Geneva
Publications
Featured research published by Satohiro Tajima.
Proceedings of the National Academy of Sciences of the United States of America | 2015
Gouki Okazawa; Satohiro Tajima; Hidehiko Komatsu
Significance: Our visual world is richly decorated with a great variety of textures, but the brain mechanisms underlying texture perception remain poorly understood. Here we studied the selectivity of neurons in visual area V4 of the macaque monkey with synthetic textures having known combinations of higher-order image statistics. We found that V4 neurons typically respond best to particular sparse combinations of these statistics. We also found that population responses of texture-selective V4 neurons can explain human texture discrimination and categorization. Because the statistics of each image can be computed from responses of upstream neurons in visual area V1, our results provide a clear account of how the visual system processes local image features to create the global perception of texture in natural images.

Our daily visual experiences are inevitably linked to recognizing the rich variety of textures. However, how the brain encodes and differentiates a plethora of natural textures remains poorly understood. Here, we show that many neurons in macaque V4 selectively encode sparse combinations of higher-order image statistics to represent natural textures. We systematically explored neural selectivity in a high-dimensional texture space by combining texture synthesis and efficient-sampling techniques. This yielded parameterized models for individual texture-selective neurons. The models provided parsimonious but powerful predictors of each neuron's preferred textures using a sparse combination of image statistics. As a whole population, the neuronal tuning was distributed in a way suitable for categorizing textures, and it quantitatively predicts the human ability to discriminate textures. Together, we suggest that the collective representation of visual image statistics in V4 plays a key role in organizing natural texture perception.
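The sparse-combination fitting described above lends itself to a compact illustration: given per-stimulus image statistics and a neuron's firing rates, an L1-regularized regression recovers a sparse set of predictive statistics. This is a minimal sketch, not the authors' pipeline; the synthetic data, scikit-learn's Lasso, and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Illustrative stand-ins: 500 texture stimuli, each described by
# 100 higher-order image statistics (e.g., Portilla-Simoncelli-style).
X = rng.standard_normal((500, 100))              # image statistics per stimulus
true_w = np.zeros(100)
true_w[[3, 17, 42]] = [1.5, -0.8, 1.1]           # a "sparse combination" ground truth
y = X @ true_w + 0.1 * rng.standard_normal(500)  # simulated firing rates

# L1 regularization encourages a sparse set of statistics per neuron,
# mirroring the finding that V4 neurons prefer sparse combinations.
model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("statistics retained by the sparse model:", selected)
```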
PLOS Computational Biology | 2015
Satohiro Tajima; Toru Yanagawa; Naotaka Fujii; Taro Toyoizumi
Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interactions underlying cortex-wide dynamics that may be missed by conventional correlation-based analyses, demonstrating a critical role of time-series analysis in characterizing brain states. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution of sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness.
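The core of the cross-embedding approach, delay-embedding one area's activity and testing how well the reconstructed state space predicts another area's signal, can be sketched with a simple nearest-neighbor cross-mapping. The embedding dimension, delay, neighbor count, and toy signals below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Stack delayed copies of x into state-space vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def cross_map_skill(x, y, dim=3, tau=1, k=4):
    """Predict y from the delay embedding of x via k nearest neighbors;
    return the correlation between predicted and actual y."""
    E = delay_embed(x, dim, tau)
    target = y[(dim - 1) * tau :]
    preds = np.empty(len(E))
    for i, v in enumerate(E):
        d = np.linalg.norm(E - v, axis=1)
        d[i] = np.inf                        # exclude the query point itself
        nn = np.argsort(d)[:k]
        w = np.exp(-d[nn] / (d[nn].min() + 1e-12))
        preds[i] = np.average(target[nn], weights=w)
    return np.corrcoef(preds, target)[0, 1]

# Toy example: x carries information about y, so x's embedding maps y well.
rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 60, 2000)) + 0.05 * rng.standard_normal(2000)
x = np.roll(y, 5) + 0.05 * rng.standard_normal(2000)
print("cross-embedding skill (x maps y):", round(cross_map_skill(x, y), 3))
```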
The Journal of Neuroscience | 2010
Satohiro Tajima; Masataka Watanabe; Chihiro Imai; Kenichi Ueno; Takeshi Asamizuya; Pei Sun; Keiji Tanaka; Kang Cheng
Spatial context in vision has profound effects on neural responses and perception. Recent animal studies suggest that the effect of the surround on a central stimulus can dramatically change its character depending on the contrast of the center stimulus, but such a drastic change has not been demonstrated in the human visual cortex. To examine the dependency of the surround effect on the contrast of the center stimulus, we conducted a functional magnetic resonance imaging (fMRI) experiment using a low or a high contrast in the center region while the surround contrast was sinusoidally modulated between the two contrasts. We found that the blood oxygen level-dependent response in human V1 corresponding to the center region was differentially modulated by the surround contrast, depending crucially on the center contrast: whereas a suppressive effect was observed in conditions in which the center contrast was high, a facilitative effect was seen in conditions where the center contrast was low.
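A contrast-dependent sign flip of the surround effect is often captured by a divisive normalization model in which the surround weakly adds to the driving input but strongly scales the suppressive pool. The sketch below is a generic model of this kind with arbitrary parameters; it is not the analysis performed in the paper.

```python
import numpy as np

def bold_response(c_center, c_surround,
                  n=2.0, sigma=0.15, k_add=0.1, k_norm=0.3):
    """Generic normalization model: the surround weakly adds to the drive
    (spatial spread) and also scales the suppressive normalization pool."""
    drive = c_center**n + k_add * c_surround**n
    pool = sigma**n + c_center**n + k_norm * c_surround**n
    return drive / pool

for c_center in (0.05, 0.8):                       # low vs. high center contrast
    low, high = (bold_response(c_center, cs) for cs in (0.05, 0.8))
    effect = "facilitative" if high > low else "suppressive"
    print(f"center contrast {c_center}: surround effect is {effect}")
```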
Nature Communications | 2016
Satohiro Tajima; Jan Drugowitsch; Alexandre Pouget
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack a theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time and to depend on a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, for example, by using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
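The optimal policy described above can be made concrete by simulating a diffusion model whose decision boundaries collapse over time; the exponential boundary and all parameters below are arbitrary illustrative choices, not the boundaries derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def collapsing_bound(t, b0=1.0, tau=0.7):
    """Boundary that decays toward zero, forcing a decision as time grows."""
    return b0 * np.exp(-t / tau)

def simulate_ddm(drift, dt=0.001, t_max=3.0):
    """Diffusion-to-bound with collapsing boundaries; returns (choice, RT)."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
        b = collapsing_bound(t)
        if x >= b:
            return 1, t
        if x <= -b:
            return 0, t
    return int(x > 0), t_max        # forced choice at the deadline

trials = [simulate_ddm(drift=0.5) for _ in range(2000)]
choices, rts = zip(*trials)
print(f"P(choose 1) = {np.mean(choices):.2f}, mean RT = {np.mean(rts):.2f} s")
```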
Scientific Reports | 2016
Chihiro I. Tajima; Satohiro Tajima; Kowa Koida; Hidehiko Komatsu; Kazuyuki Aihara; Hideyuki Suzuki
Categorical perception is a ubiquitous function in sensory information processing, and is reported to have important influences on the recognition of presented and/or memorized stimuli. However, such complex interactions between categorical perception and other aspects of sensory processing have not been well explained in a unified manner. Here, we propose a recurrent neural network model to process categorical information of stimuli, which approximately realizes hierarchical Bayesian estimation on the stimuli. The model accounts for a wide variety of neurophysiological and cognitive phenomena in a consistent framework. In particular, the reported complexity of categorical effects, including (i) task-dependent modulation of neural responses, (ii) clustering of neural population representations, (iii) temporal evolution of perceptual color memory, and (iv) a non-uniform discrimination threshold, is explained as different aspects of a single model. Moreover, we directly examine key model behaviors in the monkey visual cortex by analyzing neural population dynamics during categorization and discrimination of color stimuli. We find that the categorical task causes temporally evolving biases in the neuronal population representations toward the focal colors, which supports the proposed model. These results suggest that categorical perception can be achieved by recurrent neural dynamics that approximate optimal probabilistic inference in a changing environment.
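The key model behavior, a recurrent estimate drifting toward focal colors under a categorical prior, can be sketched as gradient dynamics on a log posterior. The one-dimensional stimulus space, mixture-of-Gaussians prior, and all parameters are illustrative simplifications of the paper's recurrent network.

```python
import numpy as np

focal = np.array([0.0, 1.0, 2.0])   # illustrative category centers ("focal colors")

def recurrent_estimate(stimulus, w_cat, sigma_s=0.3, sigma_c=0.3,
                       dt=0.01, steps=1000):
    """Gradient dynamics on a log posterior combining a Gaussian likelihood
    around the stimulus with a mixture-of-Gaussians categorical prior.
    w_cat scales the prior: larger during a categorization task."""
    x = float(stimulus)
    for _ in range(steps):
        r = np.exp(-(x - focal) ** 2 / (2 * sigma_c ** 2))
        r /= r.sum()                                  # category responsibilities
        grad = (stimulus - x) / sigma_s ** 2 \
             + w_cat * r @ (focal - x) / sigma_c ** 2
        x += dt * grad
    return x

s = 0.35   # stimulus between two focal colors, closer to the first
print("discrimination task (w_cat=0):", round(recurrent_estimate(s, 0.0), 3))
print("categorization task (w_cat=2):", round(recurrent_estimate(s, 2.0), 3))
```

With the categorical prior switched on, the estimate is pulled toward the nearest focal color, mimicking the temporally evolving bias reported in the abstract.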
Cerebral Cortex | 2016
Gouki Okazawa; Satohiro Tajima; Hidehiko Komatsu
Complex shape and texture representations are known to be constructed from V1 along the ventral visual pathway through areas V2 and V4, but the underlying mechanism remains elusive. A recent study suggests that, for the processing of textures, a collection of higher-order image statistics computed by combining V1-like filter responses serves as a possible representation of textures in both V2 and V4. Here, to gain insight into how these image statistics are processed in the extrastriate visual areas, we compared neuronal responses to textures in V2 and V4 of macaque monkeys. For individual neurons, we adaptively explored their preferred textures from among thousands of naturalistic textures and fitted the obtained responses using a combination of V1-like filter responses and higher-order statistics. We found that, while the selectivity for image statistics was largely comparable between V2 and V4, V4 showed slightly stronger sensitivity to the higher-order statistics than V2. Consistent with that finding, V4 responses were reduced to a greater extent than V2 responses when the monkeys were shown spectrally matched noise images that lacked higher-order statistics. We therefore suggest that there is a gradual development in the representation of higher-order features along the ventral visual hierarchy.
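The spectrally matched noise used as a control here is conventionally produced by phase scrambling: keep an image's amplitude spectrum, randomize its phases, and the higher-order statistics that define texture are destroyed. A minimal NumPy sketch, assuming a grayscale image array:

```python
import numpy as np

def phase_scramble(image, seed=0):
    """Return noise with the same amplitude spectrum as `image` but with the
    (Hermitian-symmetric) random phases of a white-noise image, destroying
    the higher-order statistics that define texture."""
    rng = np.random.default_rng(seed)
    amplitude = np.abs(np.fft.fft2(image))
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * noise_phase)))

# Toy usage: the scrambled image keeps the texture's amplitude spectrum.
texture = np.random.default_rng(1).random((64, 64))
noise = phase_scramble(texture)
assert np.allclose(np.abs(np.fft.fft2(texture)), np.abs(np.fft.fft2(noise)))
```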
IEEE Transactions on Image Processing | 2015
Satohiro Tajima; Kazuteru Komine
Perception of color varies markedly between individuals because of differential expression of photopigments in retinal cones. However, it has been difficult to quantify individual variation in the perception of colored scenes and to predict its complex impacts on behavior. We developed a method for quantifying and visualizing the information loss and gain resulting from individual differences in spectral sensitivity, based on visual salience. We first modeled visual salience for color-deficient observers and found that the predicted losses and gains in local image salience derived from normal and color-blind models were correlated with subjective judgments of image saliency in psychophysical experiments; that is, saliency loss predicted reduced image preference in color-deficient observers. Moreover, saliency-guided image manipulations sufficiently compensated for individual differences in saliency. This visual saliency approach allows for quantification of the information extracted from complex visual scenes and can be used for image compensation to enhance visual accessibility for color-deficient individuals.
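The salience-based loss quantification can be caricatured in a few lines: compute a local-salience map on a color-opponent channel before and after simulating the deficiency, then take the difference. The block-wise standard-deviation salience proxy and the crude red-green collapse below are illustrative stand-ins for the full color-vision and salience models used in the paper.

```python
import numpy as np

def local_salience(channel, k=5):
    """Crude salience proxy: local standard deviation over k x k patches."""
    h, w = (s - s % k for s in channel.shape)
    patches = channel[:h, :w].reshape(h // k, k, w // k, k)
    return patches.std(axis=(1, 3))

def simulate_deficiency(rgb):
    """Very crude red-green deficiency: collapse R and G onto their mean."""
    out = rgb.copy()
    mixed = rgb[..., :2].mean(axis=-1)
    out[..., 0] = out[..., 1] = mixed
    return out

def rg_opponent(rgb):
    """Red-green opponent channel."""
    return rgb[..., 0] - rgb[..., 1]

img = np.random.default_rng(0).random((60, 60, 3))
loss = (local_salience(rg_opponent(img))
        - local_salience(rg_opponent(simulate_deficiency(img))))
print("mean local salience lost on the red-green axis:",
      round(float(loss.mean()), 4))
```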
PLOS ONE | 2010
Satohiro Tajima; Masato Okada
The power law provides an efficient description of amplitude spectra of natural scenes. Psychophysical studies have shown that the forms of the amplitude spectra are clearly related to human visual performance, indicating that the statistical parameters of natural scenes are represented in the nervous system. However, the underlying neuronal computation that accounts for the perception of the natural image statistics has not been thoroughly studied. We propose a theoretical framework for neuronal encoding and decoding of the image statistics, hypothesizing the population activities elicited in the spatial-frequency-selective neurons observed in the early visual cortex. The model predicts that frequency-tuned neurons have asymmetric tuning curves as functions of the amplitude-spectrum falloff. To investigate the ability of this neural population to encode the statistical parameters of input images, we analyze the Fisher information of the stochastic population code, relating it to the psychophysically measured human ability to discriminate natural image statistics. The nature of the discrimination thresholds suggested by the computational model is consistent with experimental data from previous studies. Of particular interest, a reported qualitative disparity between performance in the fovea and parafovea can be explained by the distributional difference over preferred frequencies of neurons in the current model. The threshold shows a peak at a small falloff parameter when the neuronal preferred spatial frequencies are narrowly distributed, whereas the threshold peak vanishes for a neural population with a more broadly distributed frequency preference. These results demonstrate that the distributional properties of neuronal stimulus preferences can play a crucial role in linking microscopic neurophysiological phenomena and macroscopic human behaviors.
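For an independent Poisson population, the Fisher information has the standard closed form I(theta) = sum_i f_i'(theta)^2 / f_i(theta), and the discrimination threshold scales as 1/sqrt(I(theta)). The sketch below uses symmetric Gaussian tuning over the falloff parameter for brevity; the paper's model instead derives asymmetric tuning curves.

```python
import numpy as np

# Illustrative population: Gaussian tuning curves over the spectral
# falloff parameter alpha (the paper derives asymmetric curves instead).
preferred = np.linspace(0.5, 2.5, 50)        # preferred falloff per neuron
width, peak_rate = 0.3, 20.0

def rates(alpha):
    return peak_rate * np.exp(-(alpha - preferred) ** 2 / (2 * width ** 2))

def fisher_information(alpha, d_alpha=1e-4):
    """I(alpha) = sum_i f_i'(alpha)^2 / f_i(alpha), independent Poisson neurons."""
    f = rates(alpha)
    df = (rates(alpha + d_alpha) - rates(alpha - d_alpha)) / (2 * d_alpha)
    return np.sum(df ** 2 / (f + 1e-12))

for alpha in (1.0, 1.5, 2.0):
    I = fisher_information(alpha)
    print(f"alpha={alpha}: discrimination threshold ~ {1 / np.sqrt(I):.4f}")
```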
Neural Computation | 2010
Satohiro Tajima; Hiromasa Takemura; Ikuya Murakami; Masato Okada
Spatiotemporal context in a sensory stimulus has profound effects on neural responses and perception, and it sometimes affects task difficulty. Recently reported experimental data suggest that human detection sensitivity to motion in a target stimulus can be enhanced by adding a slow surrounding motion in an orthogonal direction, even though the illusory motion component caused by the surround is not relevant to the task. It is not computationally clear how the task-irrelevant component of motion modulates a subject's sensitivity to motion detection. In this study, we investigated the effects of encoding biases on detection performance by modeling stochastic neural population activities. We modeled two types of modulation of the population activity profiles caused by a contextual stimulus: one type is identical to the activity evoked by a physical change in the stimulus, and the other is expressed more simply in terms of response gain modulation. For both encoding schemes, the motion detection performance of the ideal observer is enhanced by a task-irrelevant, additive motion component, replicating the properties observed for real subjects. The success of these models suggests that human detection sensitivity can be characterized by a noisy neural encoding that limits the resolution of information transmission in the cortical visual processing pathway. On the other hand, analyses of the neuronal contributions to the task predict that the effective cell populations differ between the two encoding schemes, posing a question about the decoding schemes that the nervous system uses during illusory states.
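The two encoding schemes can be contrasted in a toy direction-tuned population: one treats the contextual modulation as a shift of the effective stimulus (as a physical change would produce), the other as a direction-dependent response gain, with ideal-observer sensitivity read off the Poisson-population Fisher information. All tuning functions and parameters are illustrative assumptions.

```python
import numpy as np

preferred = np.linspace(-np.pi, np.pi, 60, endpoint=False)
kappa, peak = 2.0, 15.0

def tuning(theta):
    """Von Mises direction tuning for the whole population."""
    return peak * np.exp(kappa * (np.cos(theta - preferred) - 1))

def encode(theta, scheme, context=0.3):
    if scheme == "shift":   # context acts like a physical change in direction
        return tuning(theta + context)
    if scheme == "gain":    # context multiplicatively modulates responses
        return tuning(theta) * (1 + 0.5 * np.cos(preferred - context))
    return tuning(theta)    # no contextual modulation

def fisher(theta, scheme, d=1e-4):
    """Poisson-population Fisher information, sum f'^2 / f."""
    f = encode(theta, scheme)
    df = (encode(theta + d, scheme) - encode(theta - d, scheme)) / (2 * d)
    return np.sum(df ** 2 / (f + 1e-12))

for scheme in ("none", "shift", "gain"):
    print(f"{scheme:>5}: detection sensitivity ~ {np.sqrt(fisher(0.0, scheme)):.2f}")
```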
Journal of Vision | 2011
Hiromasa Takemura; Satohiro Tajima; Ikuya Murakami
When two random-dot patterns moving in different directions are superimposed, motion appears coherent or transparent depending on the directional difference. In addition, when a pattern is surrounded by another pattern that is moving, the perceived motion of the central stimulus is biased away from the direction of the surrounding motion; this phenomenon is known as induced motion. How is the perception of motion coherence and transparency modulated by surrounding motion? It was found that two random-dot horizontal motions surrounded by another stimulus in downward motion appeared to move in two oblique directions: left-up and right-up. Consequently, when motion transparency occurs, each of the two motions interacts independently with the induced motion direction. Furthermore, for a central stimulus consisting of two physical motions in the left-up and right-up directions, the presence of a surrounding stimulus in vertical motion modulated the perceptual solution of motion coherence/transparency such that if interactions with an induced motion signal narrow the apparent directional difference between the two central motions, then motion coherence is preferred over motion transparency. Therefore, whether a moving stimulus is perceived as coherent or transparent is determined by the internal representation of motion directions, which can be altered by spatial interactions between adjacent regions.
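The vector-combination account implicit in these results, each central motion summing with an induced component opposite to the surround, with coherence preferred when the apparent directional difference shrinks, fits in a few lines; the induced-motion gain and the 60-degree coherence threshold are invented for illustration.

```python
import numpy as np

def perceived_directions(central_dirs_deg, surround_dir_deg, induced_gain=0.4):
    """Add an induced component (opposite to the surround) to each central
    motion vector and return the apparent directions in degrees."""
    induced = -induced_gain * np.array([np.cos(np.radians(surround_dir_deg)),
                                        np.sin(np.radians(surround_dir_deg))])
    out = []
    for d in central_dirs_deg:
        v = np.array([np.cos(np.radians(d)), np.sin(np.radians(d))]) + induced
        out.append(np.degrees(np.arctan2(v[1], v[0])))
    return out

# Two horizontal central motions with a downward (-90 deg) surround:
dirs = perceived_directions([0, 180], surround_dir_deg=-90)
print("apparent directions:", [round(d, 1) for d in dirs])  # both tilted upward

# Coherence vs. transparency from the apparent directional difference:
diff = abs(dirs[0] - dirs[1]) % 360
diff = min(diff, 360 - diff)
print("percept:", "coherent" if diff < 60 else "transparent")  # invented threshold
```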