Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kazushi Maruya is active.

Publication


Featured research published by Kazushi Maruya.


Vision Research | 2015

Seeing liquids from visual motion

Takahiro Kawabe; Kazushi Maruya; Roland W. Fleming; Shin'ya Nishida

Most research on human visual recognition focuses on solid objects, whose identity is defined primarily by shape. In daily life, however, we often encounter materials that have no specific form, including liquids whose shape changes dynamically over time. Here we show that human observers can recognize liquids and their viscosities solely from image motion information. Using a two-dimensional array of noise patches, we presented observers with motion vector fields derived from diverse computer-rendered scenes of liquid flow. Our observers perceived liquid-like materials in the noise-based motion fields, and could judge the simulated viscosity with surprising accuracy, given the total absence of non-motion information, including form. We find that the critical feature for apparent liquid viscosity is local motion speed, whereas for the impression of liquidness, image statistics related to spatial smoothness, including the mean discrete Laplacian of the motion vectors, are important. Our results show that the brain exploits a wide range of motion statistics to identify non-solid materials.
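As an illustration of the kind of smoothness statistic the abstract mentions, the mean discrete Laplacian of a sampled motion vector field can be computed with a standard 5-point stencil. This is a generic sketch (function name and NumPy implementation are our own assumptions, not the authors' analysis code):

```python
import numpy as np

def mean_discrete_laplacian(vx, vy):
    """Mean magnitude of the discrete Laplacian of a 2-D motion vector field.

    vx, vy: 2-D arrays holding the horizontal and vertical velocity
    components sampled on a grid (e.g., one vector per noise patch).
    Smaller values indicate a spatially smoother flow field.
    """
    def lap(f):
        # 5-point Laplacian stencil, evaluated on interior samples only
        return (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
                - 4.0 * f[1:-1, 1:-1])

    lx, ly = lap(vx), lap(vy)
    # average magnitude of the per-sample Laplacian vectors
    return float(np.mean(np.hypot(lx, ly)))
```

A spatially uniform flow field yields zero, while an incoherent (rough) field yields a large value, so the statistic separates smooth, liquid-like flows from noise.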


Proceedings of the National Academy of Sciences of the United States of America | 2015

Perceptual transparency from image deformation

Takahiro Kawabe; Kazushi Maruya; Shin'ya Nishida

Significance: The perception of liquids, particularly water, is a vital sensory function for survival, but little is known about the visual perception of transparent liquids. Here we show that human vision has an excellent ability to perceive a transparent liquid solely from dynamic image deformation. No other known image cues are needed for the perception of transparent surfaces. Static deformation is not effective for perceiving transparent liquids. Human vision interprets dynamic image deformation as caused by light refraction at the moving liquid's surface. Transparent liquid is well perceived from artificial image deformations, which share only basic flow features with image deformations caused by physically correct light refraction.

Human vision has a remarkable ability to perceive two layers at the same retinal locations: a transparent layer in front of a background surface. Critical image cues to perceptual transparency, studied extensively in the past, are changes in luminance or color that could be caused by light absorption and reflection by the front layer, but such image changes may not be clearly visible when the front layer consists of a pure transparent material such as water. Our daily experiences with transparent materials of this kind suggest that an alternative potential cue of visual transparency is image deformation of a background pattern caused by light refraction. Although previous studies have indicated that these image deformations, at least static ones, play little role in perceptual transparency, here we show that dynamic image deformations of the background pattern, which could be produced by light refraction at a moving liquid's surface, can produce a vivid impression of a transparent liquid layer without the aid of any other visual cues as to the presence of a transparent layer. Furthermore, a transparent liquid layer perceptually emerges even from a randomly generated dynamic image deformation as long as it is similar to real liquid deformations in its spatiotemporal frequency profile. Our findings indicate that the brain can perceptually infer the presence of "invisible" transparent liquids by analyzing the spatiotemporal structure of dynamic image deformation, for which it uses a relatively simple computation that does not require high-level knowledge about the detailed physics of liquid deformation.


Journal of Neurophysiology | 2012

Human neural responses involved in spatial pooling of locally ambiguous motion signals

Kaoru Amano; Tsunehiro Takeda; Tomoki Haji; Masahiko Terao; Kazushi Maruya; Kenji Matsumoto; Ikuya Murakami; Shin'ya Nishida

Early visual motion signals are local and one-dimensional (1-D). To specify global two-dimensional (2-D) motion vectors, the visual system must appropriately integrate these signals across orientation and space. Previous neurophysiological studies have suggested that this integration process consists of two computational steps (estimation of local 2-D motion vectors, followed by their spatial pooling), both of which have been identified in area MT. Psychophysical findings, however, suggest that under certain stimulus conditions the human visual system can also compute mathematically correct global motion vectors by directly pooling spatially distributed 1-D motion signals. To study the neural mechanisms responsible for this novel 1-D motion pooling, we conducted human magnetoencephalography (MEG) and functional MRI experiments using a global motion stimulus comprising multiple moving Gabors (global-Gabor motion). In the first experiment, we measured MEG and blood oxygen level-dependent responses while changing the motion coherence of global-Gabor motion. In the second experiment, we investigated cortical responses correlated with direction-selective adaptation to the global 2-D motion, but not to local 1-D motions. We found that human MT complex (hMT+) responses show both coherence dependency and direction selectivity to global motion based on 1-D pooling. The results provide the first evidence that hMT+ is the locus of 1-D motion pooling, as well as of conventional 2-D motion pooling.
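The direct pooling of 1-D motion signals described above can be illustrated with the standard intersection-of-constraints computation: each aperture-limited 1-D measurement constrains only the velocity component along the normal to its grating, and a global 2-D vector consistent with many such constraints can be recovered by least squares. This is a generic sketch of that computation (names are our own), not the authors' stimulus or analysis code:

```python
import numpy as np

def pool_1d_motions(normals, speeds):
    """Least-squares intersection-of-constraints estimate of a global
    2-D velocity from many local 1-D (normal) motion measurements.

    normals: (N, 2) unit vectors perpendicular to each 1-D grating
    speeds:  (N,) signed speeds measured along those normals
    Each measurement i constrains the global velocity v via
    normals[i] @ v = speeds[i]; with two or more distinct orientations
    the system determines v, solved here in the least-squares sense.
    """
    A = np.asarray(normals, dtype=float)
    b = np.asarray(speeds, dtype=float)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

With noiseless measurements from two or more orientations, the estimate recovers the true global velocity exactly; with noisy measurements, adding more oriented elements averages the error down, which is the computational benefit of spatial pooling.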


Vision Research | 2003

Reversed-phi perception with motion-defined motion stimuli

Kazushi Maruya; Yosuke Mugishima; Takao Sato

Perception of reversed-phi with motion-defined motion (MDM) stimuli was examined while varying parameters including eccentricity. For peripheral viewing, reversed-phi was observed at all displacements between 30 degrees and 135 degrees. The perception was most prominent at 90 degrees, but was disrupted by dichoptic presentation. These results suggest the operation of an energy-based motion system similar to the first-order motion system for luminance motion, which most likely resides at a relatively early level (cf. [Vision Res. 33 (1993) 533]). For central viewing, reversed motion was observed only for larger displacements. The perceived motion at smaller displacements was predominantly in the forward direction. The transition between the two modes occurred around a 90-degree displacement. In addition, this motion perception was not disrupted by dichoptic presentation. This indicates the operation of a polarity-independent, matching-based motion system residing at a higher level. Thus, the results indicate the involvement of at least two separate mechanisms for MDM detection, with a dominance shift between the two systems according to eccentricity.


Vision Research | 2010

Conditional spatial-frequency selective pooling of one-dimensional motion signals into global two-dimensional motion

Kazushi Maruya; Kaoru Amano; Shin'ya Nishida

This study examined spatial-frequency effects on a motion-pooling process in which spatially distributed local one-dimensional motion signals are integrated into the perception of global two-dimensional motion. Motion pooling over two- to three-octave frequency differences was found to be nearly impossible when all Gabor elements had circular envelopes, but possible when the width of the high-frequency elements was reduced and the stimulus as a whole formed a closed contour configuration. These results are consistent with the view that motion pooling is controlled by form information, and that a spatial-frequency difference is one, but not an absolute, form cue for segmentation.


Journal of Vision | 2013

Temporal characteristics of depth perception from motion parallax

Kenchi Hosokawa; Kazushi Maruya; Takao Sato

Temporal characteristics of depth perception from motion parallax were examined by modulating parallax intermittently while observers moved their head side to side. In Experiment 1, parallax of a fixed value was introduced only for the central 1/6 to 5/6 portion of each component head movement. It was found that the perceived depth was proportional to the temporal average of parallax-specified depth. In addition, observers did not notice any abrupt temporal change of depth. In Experiment 2, parallax was increased or decreased once per trial, either at the center or at the end of one of the component head movements, and observers judged the direction of the depth change. Again, observers did not notice any abrupt change of depth. The percentage of correct responses was almost constant for large change amplitudes. Reaction times to the change were over 1 s even for the largest changes, and they increased for smaller change amplitudes. These results indicate that the mechanism for depth from parallax has a configuration similar to that proposed for structure from motion, and that it involves a temporal integration process with a relatively long time constant.


Journal of Vision | 2011

Spatial pooling of one-dimensional second-order motion signals

Kazushi Maruya; Shin'ya Nishida

We can detect visual movements not only from luminance motion signals (first-order motion) but also from non-luminance motion signals (second-order motion). It has been established for first-order motions that the visual system pools local one-dimensional motion signals across space and orientation to solve the aperture problem and to estimate two-dimensional object motion. In this study, we investigated (i) whether local one-dimensional second-order motion signals are also pooled across space and orientation into a global 2D motion, and if so, (ii) whether the second-order motion signals are pooled independently of, or in cooperation with, first-order motion signals. We measured the direction-discrimination performance and the rating of a global circular translation of four oscillating bars, each defined either by luminance or by a non-luminance attribute, such as flicker and binocular depth. The results showed evidence of motion pooling both when the stimulus consisted only of second-order bars and when it consisted of first-order and second-order bars. We observed global motion pooling across first-order motion and second-order motions even when the first-order motion was not accompanied by trackable position changes. These results suggest the presence of a universal pooling system for first- and second-order one-dimensional motion signals.


Frontiers in Psychology | 2017

Reading Traits for Dynamically Presented Texts: Comparison of the Optimum Reading Rates of Dynamic Text Presentation and the Reading Rates of Static Text Presentation

Miki Uetsuki; Junji Watanabe; Hideyuki Ando; Kazushi Maruya

With the growth in digital display technologies, dynamic text presentation is used widely in everyday life, such as in electronic advertisements and tickers on TV programs. Unlike static text reading, little is known about the basic characteristics underlying reading dynamically presented texts. Two experiments were performed to investigate this. Experiment 1 examined the optimum rate of dynamic text presentation in terms of readability and favorability. This experiment demonstrated that, when the rate of text presentation was changed, there was an optimum presentation rate (around 6 letters/s in our condition) regardless of difficulty level. This indicates that the presentation rate of dynamic texts can affect the impression of reading. In Experiment 2, to elucidate the traits underlying dynamic text reading, we measured the reading speeds of silent and trace reading among the same participants and compared them with the optimum presentation rate obtained in Experiment 1. The results showed that the optimum rate was slower than silent reading and faster than trace reading, and, interestingly, the individual optimum rates of dynamic text presentation were correlated with the speeds of both silent and trace reading. In other words, readers who preferred a fast rate of dynamic text presentation also tended to have high reading speeds for silent and trace reading.


Journal of Vision | 2011

Luminance-color interactions in surface gloss perception

Shin'ya Nishida; Isamu Motoyoshi; Kazushi Maruya


Journal of Vision | 2013

Rapid encoding of relationships between spatially remote motion signals.

Kazushi Maruya; Alex O. Holcombe; Shin'ya Nishida

Collaboration


Dive into Kazushi Maruya's collaborations.

Top Co-Authors

Shin'ya Nishida

Nippon Telegraph and Telephone


Takao Sato

Jikei University School of Medicine


Junji Watanabe

Tokyo Institute of Technology
