Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Akiyuki Anzai is active.

Publication


Featured research published by Akiyuki Anzai.


Nature Neuroscience | 2007

Neurons in monkey visual area V2 encode combinations of orientations.

Akiyuki Anzai; Xinmiao Peng; David C. Van Essen

Contours and textures are important attributes of object surfaces, and are often described by combinations of local orientations in visual images. To elucidate the neural mechanisms underlying contour and texture processing, we examined receptive field (RF) structures of neurons in visual area V2 of the macaque monkey for encoding combinations of orientations. By measuring orientation tuning at several locations within the classical RF, we found that a majority (70%) of V2 neurons have similar orientation tuning throughout the RF. However, many others have RFs containing subregions tuned to different orientations, most commonly about 90° apart. By measuring interactions between two positions within the RF, we found that approximately one-third of neurons show inhibitory interactions that make them selective for combinations of orientations. These results indicate that V2 neurons could play an important role in analyzing contours and textures and could provide useful cues for surface segmentation.
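
The two analysis steps described above lend themselves to a compact illustration. The following is a minimal Python sketch, not the authors' analysis code, using entirely hypothetical firing rates: it estimates the preferred orientation at two receptive-field positions by a circular (vector) average, and computes a simple interaction index that is negative when the response to a paired-orientation stimulus falls below the sum of the responses to each orientation alone.

import numpy as np

orientations = np.arange(0, 180, 22.5)               # stimulus orientations in degrees

def preferred_orientation(rates, oris=orientations):
    # Vector-average preferred orientation; angles are doubled because
    # orientation is periodic over 180 degrees.
    angles = np.deg2rad(2 * oris)
    vec = np.sum(rates * np.exp(1j * angles))
    return np.rad2deg(np.angle(vec)) / 2 % 180

# hypothetical tuning curves measured at two positions within the RF
rates_pos1 = np.array([2, 5, 14, 30, 18, 6, 3, 2], float)
rates_pos2 = np.array([20, 28, 12, 4, 2, 3, 8, 15], float)
print(preferred_orientation(rates_pos1), preferred_orientation(rates_pos2))

# hypothetical spike rates (spikes/s) for the paired stimulus and for each component alone;
# values below zero indicate the kind of inhibitory interaction described above
r_pair, r1, r2 = 22.0, 30.0, 28.0
interaction = (r_pair - (r1 + r2)) / (r_pair + r1 + r2)
print(interaction)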


Visual Neuroscience | 1999

Linear and nonlinear contributions to orientation tuning of simple cells in the cat's striate cortex

Justin L. Gardner; Akiyuki Anzai; Izumi Ohzawa; Ralph D. Freeman

Orientation selectivity is one of the most conspicuous receptive-field (RF) properties that distinguishes neurons in the striate cortex from those in the lateral geniculate nucleus (LGN). It has been suggested that orientation selectivity arises from an elongated array of feedforward LGN inputs (Hubel & Wiesel, 1962). Others have argued that cortical mechanisms underlie orientation selectivity (e.g., Sillito, 1975; Somers et al., 1995). However, isolation of each mechanism is experimentally difficult and no single study has analyzed both processes simultaneously to address their relative roles. An alternative approach, which we have employed in this study, is to examine the relative contributions of linear and nonlinear mechanisms in sharpening orientation tuning. Since the input stage of simple cells is remarkably linear, the nonlinear contribution can be attributed solely to cortical factors. Therefore, if the nonlinear component is substantial compared to the linear contribution, it can be concluded that cortical factors play a prominent role in sharpening orientation tuning. To obtain the linear contribution, we first measure RF profiles of simple cells in the cat's striate cortex using a binary m-sequence noise stimulus. Then, based on linear spatial summation of the RF profile, we obtain a predicted orientation-tuning curve, which represents the linear contribution. The nonlinear contribution is estimated as the difference between the predicted tuning curve and that measured with drifting sinusoidal gratings. We find that measured tuning curves are generally more sharply tuned for orientation than predicted curves, which indicates that the linear mechanism is not sufficient to account for the sharpness of orientation tuning. Therefore, cortical factors must play an important role in sharpening the orientation tuning of simple cells. We also examine the relationship of RF shape (subregion aspect ratio) and size (subregion length and width) to orientation-tuning halfwidth. As expected, predicted tuning halfwidths are found to depend strongly on both subregion length and subregion aspect ratio. However, we find that measured tuning halfwidths show only a weak correlation with subregion aspect ratio, and no significant correlation with RF length and width. These results suggest that cortical mechanisms not only serve to sharpen orientation tuning, but also to make orientation tuning less dependent on the size and shape of the RF. This ensures that orientation is represented equally well regardless of RF size and shape.
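
The linear prediction step can be illustrated with a short sketch. The Python below is a minimal illustration under stated assumptions, not the study's code: a Gabor function stands in for a measured m-sequence RF map, and the linearly predicted response to a drifting grating is taken as the modulus of the RF's projection onto a complex exponential at the grating's spatial frequency (equivalently, the RF's 2D Fourier amplitude in that direction). The predicted orientation-tuning curve and its half-width at half-height follow directly.

import numpy as np

n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
sf = 2.0                                             # preferred spatial frequency (cycles per unit)

# hypothetical simple-cell RF: Gaussian envelope elongated along x, carrier varying along y
rf = np.exp(-((x / 2)**2 + y**2) / (2 * 0.2**2)) * np.cos(2 * np.pi * sf * y)

# theta is the direction of the grating's spatial-frequency vector, sampled every 5 degrees
thetas = np.deg2rad(np.arange(0, 180, 5))
pred = []
for th in thetas:
    fx, fy = sf * np.cos(th), sf * np.sin(th)
    grating = np.exp(1j * 2 * np.pi * (fx * x + fy * y))
    pred.append(np.abs(np.sum(rf * grating)))        # linear spatial summation of RF and grating
pred = np.array(pred)

best = int(np.argmax(pred))
above_half = np.where(pred >= pred[best] / 2)[0]     # assumes a single, non-wrapping peak
print("predicted peak (deg):", np.rad2deg(thetas[best]))
print("predicted half-width at half-height (deg):", (above_half.max() - above_half.min()) * 5 / 2)

Comparing such a predicted curve with one measured using drifting gratings gives the nonlinear contribution described above.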


Nature Neuroscience | 2001

Joint-encoding of motion and depth by visual cortical neurons: neural basis of the Pulfrich effect.

Akiyuki Anzai; Izumi Ohzawa; Ralph D. Freeman

Motion and stereoscopic depth are fundamental parameters of the structural analysis of visual scenes. Because they are defined by a difference in object position, either over time or across the eyes, common neural machinery may be used for encoding these attributes. To examine this idea, we analyzed responses of binocular complex cells in the cat's striate cortex to stimuli with various intra- and interocular spatial and temporal shifts. We found that most neurons exhibit space–time-oriented response profiles in both monocular and binocular domains. This indicates that these neurons encode motion and depth jointly, which explains phenomena such as the Pulfrich effect. We also found that the relationship between neuronal tuning of motion and depth conforms to that predicted by the use of motion parallax as a depth cue. These results demonstrate a joint encoding of motion and depth at an early cortical stage.
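
The link to the Pulfrich effect can be made concrete with the classical geometry: an interocular processing delay combined with horizontal target motion acts like a binocular disparity. A short illustration in Python, with hypothetical numbers:

# Pulfrich geometry (hypothetical values): a delay dt in one eye plus horizontal velocity v
# yields an effective disparity d = v * dt, so a frontoparallel target appears displaced in depth.
v_deg_per_s = 10.0    # horizontal target velocity (deg/s)
dt_s = 0.010          # interocular delay, e.g. from a neutral-density filter (10 ms)
print(v_deg_per_s * dt_s)   # 0.1 deg of equivalent binocular disparity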


The Journal of Neuroscience | 2011

Coding of Stereoscopic Depth Information in Visual Areas V3 and V3A

Akiyuki Anzai; Syed A. Chowdhury; Gregory C. DeAngelis

The process of stereoscopic depth perception is thought to begin with the analysis of absolute binocular disparity, the difference in position of corresponding features in the left and right eye images with respect to the points of fixation. Our sensitivity to depth, however, is greater when depth judgments are based on relative disparity, the difference between two absolute disparities, compared to when they are based on absolute disparity. Therefore, the visual system is thought to compute relative disparities for fine depth discrimination. Functional magnetic resonance imaging studies in humans and monkeys have suggested that visual areas V3 and V3A may be specialized for stereoscopic depth processing based on relative disparities. In this study, we measured absolute and relative disparity-tuning of neurons in V3 and V3A of alert fixating monkeys, and we compared their basic tuning properties with those published previously for other visual areas. We found that neurons in V3 and V3A predominantly encode absolute, not relative, disparities. We also found that basic parameters of disparity-tuning in V3 and V3A are similar to those from other extrastriate visual areas. Finally, by comparing single-unit activity with multi-unit activity measured at the same recording site, we demonstrate that neurons with similar disparity selectivity are clustered in both V3 and V3A. We conclude that areas V3 and V3A are not particularly specialized for processing stereoscopic depth information compared to other early visual areas, at least with respect to the tuning properties that we have examined.
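
The distinction between the two disparity signals can be stated compactly. A minimal sketch with hypothetical values: absolute disparity is a feature's interocular positional offset relative to fixation, and relative disparity is the difference between two absolute disparities, which leaves it unaffected by a common vergence offset.

abs_disp_center = 0.20      # deg, hypothetical absolute disparity of a center patch
abs_disp_surround = -0.05   # deg, hypothetical absolute disparity of a surrounding annulus
rel_disp = abs_disp_center - abs_disp_surround
print(rel_disp)             # 0.25 deg; adding the same vergence error to both terms leaves this unchanged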


Visual Neuroscience | 1995

Contrast coding by cells in the cat's striate cortex: Monocular vs. binocular detection

Akiyuki Anzai; Marcus A. Bearse; Ralph D. Freeman; Daqing Cai

Many psychophysical studies of various visual tasks show that performance is generally better for binocular than for monocular observation. To investigate the physiological basis of this binocular advantage, we have recorded, under monocular and binocular stimulation, contrast response functions for single cells in the striate cortex of anesthetized and paralyzed cats. We applied receiver operating characteristic analysis to our data to obtain monocular and binocular neurometric functions for each cell. A contrast threshold and a slope were extracted from each neurometric function and were compared for monocular and binocular stimulation. We found that contrast thresholds and slopes varied from cell to cell but, in general, binocular contrast thresholds were lower, and binocular slopes were steeper, than their monocular counterparts. The binocular advantage ratio, the ratio of monocular to binocular thresholds for individual cells, was, on average, slightly higher than the typical ratios reported in human psychophysics. No single rule appeared to account for the various degrees of binocular summation seen in individual cells. We also found that the proportion of cells likely to contribute to contrast detection increased with stimulus contrast. Less contrast was required under binocular than under monocular stimulation to obtain the same proportion of cells that contribute to contrast detection. Based on these results, we suggest that behavioral contrast detection is carried out by a small proportion of cells that are relatively sensitive to near-threshold contrasts. Contrast sensitivity functions (CSFs) for the cell population, estimated from this hypothesis, agree well with behavioral data in both the shape of the CSF and the ratio of binocular to monocular sensitivities. We conclude that binocular summation in behavioral contrast detection may be attributed to the binocular superiority in contrast sensitivity of a small proportion of cells which are responsible for threshold contrast detection.
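
The receiver operating characteristic (ROC) step described above can be sketched briefly. The Python below is a minimal illustration with simulated Poisson spike counts, not the study's analysis code: for each contrast it computes the area under the ROC curve separating stimulus-driven counts from counts at zero contrast, which yields a neurometric function, and then reads off the contrast at which detection probability crosses an arbitrary 0.75 criterion.

import numpy as np

rng = np.random.default_rng(0)
contrasts = np.array([0.02, 0.04, 0.08, 0.16, 0.32])
blank = rng.poisson(2.0, size=200)                   # hypothetical spontaneous spike counts

def roc_area(signal, noise):
    # probability that a randomly drawn signal count exceeds a noise count (ties count half)
    s, n = signal[:, None], noise[None, :]
    return np.mean(s > n) + 0.5 * np.mean(s == n)

p_detect = []
for c in contrasts:
    driven = rng.poisson(2.0 + 40 * c, size=200)     # hypothetical contrast response
    p_detect.append(roc_area(driven, blank))
p_detect = np.array(p_detect)

# contrast threshold at 75% detection; assumes the neurometric function rises monotonically
threshold = np.interp(0.75, p_detect, contrasts)
print(p_detect, threshold)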


Journal of Neurophysiology | 1999

Neural Mechanisms for Processing Binocular Information I. Simple Cells

Akiyuki Anzai; Izumi Ohzawa; Ralph D. Freeman


Proceedings of the National Academy of Sciences of the United States of America | 1997

Neural mechanisms underlying binocular fusion and stereopsis: Position vs. phase

Akiyuki Anzai; Izumi Ohzawa; Ralph D. Freeman


Journal of Neurophysiology | 1999

Neural Mechanisms for Encoding Binocular Disparity: Receptive Field Position Versus Phase

Akiyuki Anzai; Izumi Ohzawa; Ralph D. Freeman


Proceedings of the National Academy of Sciences of the United States of America | 1995

Receptive field structure in the visual cortex: does selective stimulation induce plasticity?

Gregory C. DeAngelis; Akiyuki Anzai; Izumi Ohzawa; Ralph D. Freeman


Journal of Vision | 2010

Receptive field structure of monkey V2 neurons for encoding orientation contrast

Akiyuki Anzai; David C. Van Essen; Xinmiao Peng; Jay Hegdé

Collaboration


Dive into Akiyuki Anzai's collaborations.

Top Co-Authors

David C. Van Essen
Washington University in St. Louis

Syed A. Chowdhury
Washington University in St. Louis

Xinmiao Peng
Washington University in St. Louis

Jay Hegdé
University of Minnesota

Justin L. Gardner
RIKEN Brain Science Institute