Publication


Featured research published by Kazumichi Matsumiya.


Vision Research | 2006

Size-invariant but viewpoint-dependent representation of faces

Yunjo Lee; Kazumichi Matsumiya; Hugh R. Wilson

The present study investigated the role of size and view on face discrimination, using a novel set of synthetic face stimuli. Face discrimination thresholds were measured using a 2AFC match-to-sample paradigm, where faces were discriminated from a mean face. In Experiment 1, which assessed the effect of size alone, subjects had to match faces that differed in size up to four-fold. In Experiment 2 where only viewpoint was manipulated, a target face was presented at one of four different views (0 degree front, 6.7 degrees, 13.3 degrees, and 20 degrees side) and subsequent matches appeared either at the same or different view. Experiment 3 investigated how face view interacts with size changes, and subjects matched faces differing both in size and view. The results were as follows: (1) size changes up to four-fold had no effect on face discrimination; (2) threshold for matching different face views increased with angular difference from frontal view; (3) size differences across different views had no effect on face discrimination. Additionally, the present study found a perceptual boundary between 6.7 degrees and 13.3 degrees side views, grouping 0 degrees front and 6.7 degrees side views together and 13.3 degrees and 20 degrees side views together. This suggests categorical perception of face view. The present study concludes that face view and size are processed by parallel mechanisms.
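The abstract does not specify the fitting procedure behind the reported thresholds, but a 2AFC discrimination threshold is typically estimated by fitting a psychometric function to proportion-correct data. The sketch below is a minimal illustration of that general approach, assuming a Weibull function with a 0.5 guess rate and a 75%-correct criterion; the stimulus levels and data are hypothetical, not taken from the study.

```python
# Minimal sketch (not the paper's analysis code): estimating a 2AFC
# discrimination threshold by fitting a Weibull psychometric function.
# Stimulus levels, responses, and the 75%-correct criterion are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(x, alpha, beta):
    """Proportion correct in a 2AFC task; guess rate fixed at 0.5."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

# Hypothetical data: geometric distortion level of the test face vs.
# proportion of correct match-to-sample responses.
levels = np.array([0.02, 0.04, 0.08, 0.16, 0.32])
p_correct = np.array([0.52, 0.60, 0.74, 0.90, 0.98])

(alpha, beta), _ = curve_fit(weibull_2afc, levels, p_correct, p0=[0.1, 2.0])

# The 75%-correct threshold follows from the fitted parameters.
threshold_75 = alpha * np.log(2.0) ** (1.0 / beta)
print(f"alpha={alpha:.3f}, beta={beta:.2f}, 75% threshold={threshold_75:.3f}")
```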


Vision Research | 2001

Apparent size of an object remains uncompressed during presaccadic compression of visual space

Kazumichi Matsumiya; Keiji Uchikawa

It is well known that compression of visual space occurs near the saccade goal when visual stimuli are briefly flashed at various locations on a visual reference just before a saccade. We investigated how presaccadic compression of visual space affected the apparent size of an object. In the first experiment, subjects were instructed to report the apparent number of multiple bars briefly presented around the time of saccade onset. The reported number of four bars began to decline about 50 ms before a saccade and reached a minimum near saccade onset. This confirms that compression of visual space occurs just before saccades. In the second experiment, subjects judged the apparent width of a rectangle (a single element) or four bars (four elements) presented just before saccades. The apparent width of the four-bar stimulus was compressed just before saccades, but that of the rectangle stimulus was not compressed. Experiment 3 shows that the width compression of the four-bar stimulus is consistent with the width change predicted by compression of position. These findings indicate that the shape of a single object is not distorted at the saccade goal during presaccadic compression of visual space. In addition, Experiment 4 indicates that the apparent width of a stimulus flashed just before saccades depends on the processing of global shape. This extends the definition of a visual object during presaccadic compression of visual space to include not only a solid element but also a constellation of multiple elements. Furthermore, the results from these experiments suggest that presaccadic compression of visual space does not prevent the object recognition that underlies the attentional mechanism involved in generating saccadic eye movements.
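The prediction tested in Experiment 3 can be made concrete with a simple position-compression model: if each flashed bar is drawn toward the saccade goal by a common factor, the outer width of the bar array shrinks by that same factor. The following sketch illustrates this arithmetic with made-up positions and an assumed compression factor; it is not the authors' model code.

```python
# Illustrative sketch of the Experiment 3 prediction: if each flashed bar's
# perceived position is pulled toward the saccade goal by a compression
# factor c, the perceived width of the four-bar array shrinks by c as well.
# The goal location, bar positions, and c = 0.6 are made-up numbers.
import numpy as np

goal = 10.0                               # saccade goal (deg)
bars = np.array([6.0, 8.0, 12.0, 14.0])   # physical bar positions (deg)
c = 0.6                                   # compression factor near saccade onset

compressed = goal + c * (bars - goal)     # positions compressed toward the goal

physical_width = bars.max() - bars.min()
predicted_width = compressed.max() - compressed.min()
print(predicted_width / physical_width)   # -> 0.6, i.e. width compressed by c
```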


Journal of Vision | 2009

World-centered perception of 3D object motion during visually guided self-motion

Kazumichi Matsumiya; Hiroshi Ando

We investigated how human observers estimate an object's three-dimensional (3D) motion trajectory during visually guided self-motion. Observers performed a task in an immersive virtual reality system consisting of front, left, right, and floor screens of a room-sized cube. In one experiment, we found that the presence of an optic flow pattern simulating forward self-motion in the background induces a world-centered frame of reference, instead of an observer-centered frame of reference, for the perceived rotation of a 3D surface from motion. In another experiment, we found that the perceived direction of 3D object motion is biased toward a world-centered frame of reference when an optic flow pattern is presented in the background. In a third experiment, we confirmed that the effect of the optic flow pattern on the perceived direction of 3D object motion was not caused solely by local motion detectors responding to the change in the retinal size of the target. These results suggest that visually guided self-motion from optic flow induces world-centered criteria for estimates of 3D object motion.
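The distinction between the two reference frames can be illustrated with simple vector arithmetic: the motion of an object relative to a forward-moving observer is the object's world motion minus the observer's self-motion, so a world-centered judgment must discount self-motion. The sketch below is only an illustrative decomposition with made-up velocities, not the experimental analysis.

```python
# Minimal sketch of the reference-frame distinction: an object's motion
# relative to a forward-moving observer differs from its motion in the
# world. Velocities are illustrative 3D vectors (x, y, z) in arbitrary units.
import numpy as np

observer_velocity = np.array([0.0, 0.0, 1.0])        # simulated forward self-motion
object_velocity_world = np.array([0.5, 0.0, 0.0])    # object moves rightward in the world

# Motion of the object relative to the observer (observer-centered frame):
object_velocity_rel = object_velocity_world - observer_velocity

# An observer-centered judgment would report object_velocity_rel (which
# includes an approach/recession component caused by self-motion), whereas a
# world-centered judgment discounts self-motion and recovers the world vector.
recovered_world = object_velocity_rel + observer_velocity
print(object_velocity_rel, recovered_world)
```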


IEEE Signal Processing Letters | 2015

Gabor Filter Based on Stochastic Computation

Naoya Onizawa; Daisaku Katagiri; Kazumichi Matsumiya; Warren J. Gross; Takahiro Hanyu

This letter introduces a design and proof-of-concept implementation of Gabor filters based on stochastic computation for area-efficient hardware. The Gabor filter offers powerful image feature extraction capability but requires significant computational power. Using stochastic computation, the sine function used in the Gabor filter is approximated by combining several stochastic tanh functions designed with a state machine. A stochastic Gabor filter built from this stochastic sine shaper and a stochastic exponential function is simulated and compared with the original Gabor filter, showing almost equivalent behaviour across various frequencies and variances, with a root-mean-square error of at most 0.043. To reduce the long latency inherent in stochastic computation, 68 parallel stochastic Gabor filters are implemented in Silterra 0.13 μm CMOS technology. As a result, the proposed Gabor filters achieve a 78% area reduction compared with a conventional Gabor filter while maintaining comparable speed.
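For orientation, the sketch below illustrates two ingredients referred to in the abstract: the Gabor function that the hardware approximates, and the basic stochastic-computing idea that values encoded as random bit streams can be multiplied with a single AND gate (unipolar coding). It is a software illustration under assumed parameters, not the authors' state-machine-based sine shaper or CMOS implementation.

```python
# Sketch only, not the authors' hardware design. It shows (a) the 1D Gabor
# function that the stochastic circuit approximates and (b) the core
# stochastic-computing idea: a value in [0, 1] is encoded as the probability
# of 1s in a random bit stream, so multiplication becomes a bitwise AND.
import numpy as np

rng = np.random.default_rng(0)

def gabor_1d(x, sigma=1.0, freq=0.5, phase=0.0):
    """Conventional (floating-point) Gabor: Gaussian envelope times a sinusoid."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.sin(2 * np.pi * freq * x + phase)

def to_stream(p, length=4096):
    """Encode a probability p in [0, 1] as a stochastic bit stream."""
    return rng.random(length) < p

def stream_to_value(stream):
    """Decode a bit stream back to a value: the fraction of 1s."""
    return stream.mean()

# Unipolar stochastic multiplication: AND of two independent streams.
a, b = 0.8, 0.6
product_stream = to_stream(a) & to_stream(b)
print(stream_to_value(product_stream))   # ~0.48, approximating a * b

# Reference Gabor response that a stochastic version would approximate.
x = np.linspace(-2, 2, 9)
print(np.round(gabor_1d(x), 3))
```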


Journal of Cognitive Neuroscience | 2012

Time courses of attentional modulation in neural amplification and synchronization measured with steady-state visual-evoked potentials

Yoshiyuki Kashiwase; Kazumichi Matsumiya; Ichiro Kuriki; Satoshi Shioiri

Endogenous attention modulates the amplitude and phase coherence of steady-state visual-evoked potentials (SSVEPs). In efforts to decipher the neural mechanisms of attentional modulation, we compared the time course of attentional modulation of SSVEP amplitude (thought to reflect the magnitude of neural population activity) and phase coherence (thought to reflect neural response synchronization). We presented two stimuli flickering at different frequencies in the left and right visual hemifields and asked observers to shift their attention to either stimulus. Our results demonstrated that attention increased SSVEP phase coherence earlier than it increased SSVEP amplitude, with a positive correlation between the attentional modulations of SSVEP phase coherence and amplitude. Furthermore, the behavioral dynamics of attention shifts were more closely associated with changes in phase coherence than with changes in amplitude. These results are consistent with the possibility that attention increases neural response synchronization, which in turn leads to increased neural population activity.
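As a rough illustration of the two SSVEP measures being compared, amplitude and inter-trial phase coherence at a flicker frequency can both be read off the Fourier coefficient of each trial at that frequency. The sketch below uses synthetic epochs and assumed sampling parameters; it is not the study's analysis pipeline.

```python
# Minimal sketch (synthetic data, not the study's pipeline) of the two SSVEP
# measures compared above: amplitude and inter-trial phase coherence at a
# stimulus flicker frequency. Sampling rate and frequencies are assumed.
import numpy as np

fs = 500.0                      # sampling rate (Hz), assumed
flicker = 12.0                  # flicker frequency of one stimulus (Hz), assumed
n_trials, n_samples = 40, 1000  # 2-s epochs
t = np.arange(n_samples) / fs

rng = np.random.default_rng(1)
# Synthetic epochs: a 12 Hz response with trial-to-trial phase jitter plus noise.
phases = rng.normal(0.0, 0.5, n_trials)
epochs = np.array([np.sin(2 * np.pi * flicker * t + ph) for ph in phases])
epochs += 0.5 * rng.standard_normal(epochs.shape)

spectra = np.fft.rfft(epochs, axis=1)
freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
bin_idx = np.argmin(np.abs(freqs - flicker))

# Amplitude: mean magnitude at the flicker frequency across trials.
amplitude = np.abs(spectra[:, bin_idx]).mean()
# Phase coherence: length of the mean unit phase vector across trials (0..1).
unit_phasors = spectra[:, bin_idx] / np.abs(spectra[:, bin_idx])
phase_coherence = np.abs(unit_phasors.mean())
print(amplitude, phase_coherence)
```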


Journal of Vision | 2009

Motion mechanisms with different spatiotemporal characteristics identified by an MAE technique with superimposed gratings

Satoshi Shioiri; Kazumichi Matsumiya

We investigated spatiotemporal characteristics of motion mechanisms using a new type of motion aftereffect (MAE) we found. Our stimulus comprised two superimposed sinusoidal gratings with different spatial frequencies. After exposure to the moving stimulus, observers perceived the MAE in the static test in the direction opposite to that of the high spatial frequency grating even when low spatial frequency motion was perceived during adaptation. In contrast, in the flicker test, the MAE was perceived in the direction opposite to that of the low spatial frequency grating. These MAEs indicate that two different motion systems contribute to motion perception and can be isolated by using different test stimuli. Using a psychophysical technique based on the MAE, we investigated the differences between the two motion mechanisms. The results showed that the static MAE is the aftereffect of the motion system with a high spatial and low temporal frequency tuning (slow motion detector) and the flicker MAE is the aftereffect of the motion system with a low spatial and high temporal frequency tuning (fast motion detector). We also revealed that the two motion detectors differ in orientation tuning, temporal frequency tuning, and sensitivity to relative motion.
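The adaptation stimulus described above, two superimposed sinusoidal gratings of different spatial frequencies, is easy to sketch as a sum of two drifting sinusoids. The parameters below (spatial frequencies, speeds, and directions) are assumed for illustration and are not the values used in the study.

```python
# Illustrative sketch of the adaptation stimulus described above: two
# superimposed sinusoidal gratings with different spatial frequencies.
# Spatial frequencies, drift speeds, and directions are assumed values.
import numpy as np

def compound_grating(x_deg, t_sec,
                     sf_low=0.5, sf_high=2.0,    # cycles/deg, assumed
                     v_low=+4.0, v_high=-1.0):   # deg/s, opposite directions
    """Luminance profile of two drifting gratings added together."""
    low = np.sin(2 * np.pi * sf_low * (x_deg - v_low * t_sec))
    high = np.sin(2 * np.pi * sf_high * (x_deg - v_high * t_sec))
    return low + high

x = np.linspace(0, 4, 256)          # 4 deg of visual field
frame_0 = compound_grating(x, 0.0)
frame_1 = compound_grating(x, 0.1)  # 100 ms later: each component has shifted
print(frame_0[:4], frame_1[:4])
```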


PLOS ONE | 2015

Eye-Head Coordination for Visual Cognitive Processing

Yu Fang; Ryoichi Nakashima; Kazumichi Matsumiya; Ichiro Kuriki; Satoshi Shioiri

We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination, because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation: the distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, during which the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.
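One common way to quantify the head's contribution to a gaze shift is the ratio of head displacement to total gaze displacement, where gaze is the sum of eye-in-head and head-in-space movement. The sketch below illustrates that bookkeeping with made-up numbers; the exact metric used in the paper may differ.

```python
# Illustrative sketch of one way to quantify how much the head contributes to
# gaze shifts (gaze = eye-in-head + head-in-space). The metric and numbers are
# assumptions for illustration, not necessarily the paper's exact definition.
import numpy as np

# Horizontal displacements (deg) for a sequence of gaze shifts made during
# one continuous head movement: eye-in-head and head-in-space per saccade.
eye_in_head = np.array([8.0, 6.0, 3.0])
head_in_space = np.array([2.0, 5.0, 9.0])

gaze = eye_in_head + head_in_space
head_contribution = head_in_space / gaze
print(gaze, np.round(head_contribution, 2))
# Later saccades in the sequence show a larger head contribution,
# the qualitative pattern described above.
```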


Vision Research | 2003

The role of presaccadic compression of visual space in spatial remapping across saccadic eye movements

Kazumichi Matsumiya; Keiji Uchikawa

When multiple bars are briefly flashed near the saccadic goal on a visual reference just before a saccade, the total width of the multiple bars appears to be compressed toward the saccadic goal. We show that presaccadic compression of visual space is related to the attribution of the displacement of a visual stimulus to the displacement of another stimulus appearing after the saccade. Subjects observed a bar and a ruler. The bar was displaced during a saccade while the ruler disappeared briefly at the same time and then reappeared at its original location after the saccade. The subjects had the impression that the bar remained stationary and the ruler was displaced after the saccade. This impression was strongest when the compression of visual space reached its maximum at saccade onset. It also occurred only at the saccadic goal, in the same way as presaccadic compression of visual space. Saccadic suppression of displacement was equivalent at the saccadic goal and at the location opposite the saccadic goal, indicating that the attribution of the bar displacement to the displacement of the ruler appearing after the saccade is not a consequence of saccadic suppression of displacement. Furthermore, a direction discrimination task showed that the bar appeared stationary at the saccadic goal during compression of visual space even when it was actually displaced. We interpret these results as showing that presaccadic compression of visual space establishes the location of the saccadic goal (the bar) as a reference, and that the location of the ruler is then remapped relative to this reference location after the saccade, resulting in the illusory displacement of the ruler.
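The remapping account in the final sentence can be made concrete with a small worked example: if the displaced bar at the saccade goal is treated as a stationary reference, the unmoved ruler is re-localized relative to the bar and therefore appears to shift by an amount equal and opposite to the bar's real displacement. The numbers below are illustrative only.

```python
# Worked sketch of the interpretation above: if the bar at the saccade goal is
# treated as a stationary reference, the unmoved ruler is re-localized relative
# to the bar and therefore appears displaced. Positions are illustrative (deg).
bar_before = 10.0
bar_after = 11.5                   # bar actually displaced by +1.5 deg during the saccade
ruler_before = ruler_after = 4.0   # ruler reappears at its original location

# Ruler position coded relative to the bar (the presumed stationary reference):
rel_before = ruler_before - bar_before   # -6.0
rel_after = ruler_after - bar_after      # -7.5

apparent_ruler_shift = rel_after - rel_before   # -1.5: equal and opposite to
print(apparent_ruler_shift)                     # the bar's real displacement
```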


Psychological Science | 2013

Seeing a Haptically Explored Face: Visual Facial-Expression Aftereffect From Haptic Adaptation to a Face

Kazumichi Matsumiya

Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.


Frontiers in Psychology | 2012

Implicit learning of viewpoint-independent spatial layouts

Taiga Tsuchiai; Kazumichi Matsumiya; Ichiro Kuriki; Satoshi Shioiri

We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether viewpoint-independent spatial layout can be obtained implicitly. For this purpose, we used a contextual cueing effect, a learning effect of spatial layout in visual search displays known to be an implicit effect. We investigated the transfer of the contextual cueing effect to images from a different viewpoint by using visual search displays of 3D objects. For images from a different viewpoint, the contextual cueing effect was maintained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in environment-centered coordinates and suggests that the spatial representation of object layouts can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in the layout representations.

Collaboration


Top Co-Authors

Hirohiko Kaneko

Tokyo Institute of Technology
