Publication


Featured research published by Masataka Sawayama.


ACM Transactions on Applied Perception | 2016

Deformation Lamps: A Projection Technique to Make Static Objects Perceptually Dynamic

Takahiro Kawabe; Taiki Fukiage; Masataka Sawayama; Shin'ya Nishida

Light projection is a powerful technique that can be used to edit the appearance of objects in the real world. Based on pixel-wise modification of light transport, previous techniques have successfully modified static surface properties such as surface color, dynamic range, gloss, and shading. Here, we propose an alternative light projection technique that adds a variety of illusory yet realistic distortions to a wide range of static 2D and 3D projection targets. The key idea of our technique, referred to as Deformation Lamps, is to project only dynamic luminance information, which effectively activates the motion (and shape) processing in the visual system while preserving the color and texture of the original object. Although the projected dynamic luminance information is spatially inconsistent with the color and texture of the target object, the observer's brain automatically combines these sensory signals in such a way as to correct the inconsistency across visual attributes. We conducted a psychophysical experiment to investigate the characteristics of the inconsistency correction and found that the correction was critically dependent on the retinal magnitude of the inconsistency. Another experiment showed that the perceived magnitude of image deformation produced by our technique was underestimated. The results ruled out the possibility that the effect obtained by our technique stemmed simply from the physical change in an object's appearance by light projection. Finally, we discuss how our technique can make observers perceive a vivid and natural movement, deformation, or oscillation of a variety of static objects, including drawn pictures, printed photographs, sculptures with 3D shading, and objects with natural textures including human bodies.
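The core idea can be sketched in a few lines of NumPy. This is a toy illustration only, not the authors' implementation: the function name, the gain parameter, and the ramp images are assumptions. The projected pattern carries only the luminance difference between a deformed frame and the static original, centered on mid-gray.

```python
import numpy as np

def deformation_layer(original, deformed, gain=0.5):
    # Project only the dynamic luminance component: the difference between
    # a deformed frame and the static original, centered on mid-gray so the
    # projector can encode both brightening and darkening.
    diff = deformed.astype(float) - original.astype(float)
    layer = 0.5 + gain * diff
    return np.clip(layer, 0.0, 1.0)

# Toy target: a horizontal luminance ramp, "deformed" by a one-pixel shift.
original = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
deformed = np.roll(original, 1, axis=1)
layer = deformation_layer(original, deformed)
```

Where the frame does not change, the layer stays at mid-gray, so the object's own color and texture dominate; where it does change, the projected luminance difference drives the motion system.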


i-Perception | 2010

Local computation of lightness on articulated surrounds

Masataka Sawayama; Eiji Kimura

Lightness of a grey target on a uniform light (or dark) surround changes by articulating the surround (articulation effect). To elucidate the processing of lightness underlying the articulation effect, the present study introduced transparency over a dark surround and investigated its effects on lightness of the target. The transparency was produced by adding a contiguous external field to the dark surround while keeping local stimulus configuration constant. Results showed that the target lightness did not change on the articulated surround when a dark transparent filter was perceived over the target, although it did on the uniform surround. These results suggest that image decomposition into a transparent filter and an underlying surface does not necessarily change lightness of the surface if the surface is articulated. Moreover, the present study revealed that articulating the surround does not always enhance lightness contrast; it can reduce the contrast effect when the target luminance is not the highest within the surround. These findings are consistent with the theoretical view that lightness perception on articulated surfaces is determined locally within a spatially limited region, and they also place a constraint on how the luminance distribution within the limited region is scaled.


PLOS Computational Biology | 2018

Material and shape perception based on two types of intensity gradient information

Masataka Sawayama; Shin'ya Nishida

Visual estimation of the material and shape of an object from a single image poses a hard, ill-posed computational problem. However, in our daily life we feel we can estimate both reasonably well. The neural computation underlying this ability remains poorly understood. Here we propose that the human visual system uses different aspects of object images to separately estimate the contributions of the material and shape. Specifically, material perception relies mainly on the intensity gradient magnitude information, while shape perception relies mainly on the intensity gradient order information. A clue to this hypothesis was provided by the observation that luminance-histogram manipulation, which changes luminance gradient magnitudes but not the luminance-order map, effectively alters the material appearance but not the shape of an object. In agreement with this observation, we found that simulated physical material changes do not significantly affect the intensity order information. A series of psychophysical experiments further indicates that human surface shape perception is robust against intensity manipulations provided they do not disturb the intensity order information. In addition, we show that the two types of gradient information can be utilized for the discrimination of albedo changes from highlights. These findings suggest that the visual system relies on these diagnostic image features to estimate physical properties in a distal world.
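The distinction between the two gradient codes can be demonstrated with a short NumPy sketch (a toy example under assumed parameters, not the paper's stimuli): a monotonic, accelerating tone remap, analogous to luminance-histogram manipulation, changes gradient magnitudes while leaving the intensity-order map intact.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))

# Histogram-style manipulation: a monotonic, accelerating tone remap.
# It changes how steep the intensity gradients are...
remapped = image ** 3
grad_orig = np.abs(np.diff(image, axis=1))
grad_remap = np.abs(np.diff(remapped, axis=1))
magnitudes_changed = not np.allclose(grad_orig, grad_remap)

# ...but leaves the intensity-order map (the rank of every pixel) intact,
# because x -> x**3 is strictly increasing on [0, 1].
orders_equal = bool(np.array_equal(np.argsort(image, axis=None),
                                   np.argsort(remapped, axis=None)))
```

On the paper's hypothesis, such a manipulation should alter apparent material (gradient magnitudes change) but not apparent shape (the order map is preserved).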


Journal of Vision | 2017

Human perception of subresolution fineness of dense textures based on image intensity statistics

Masataka Sawayama; Shin'ya Nishida; Mikio Shinya

We are surrounded by many textures with fine dense structures, such as human hair and fabrics, whose individual elements are often finer than the spatial resolution limit of the visual system or that of a digitized image. Here we show that human observers have an ability to visually estimate subresolution fineness of those textures. We carried out a psychophysical experiment to show that observers could correctly discriminate differences in the fineness of hair-like dense line textures even when the thinnest line element was much finer than the resolution limit of the eye or that of the display. The physical image analysis of the textures, along with a theoretical analysis based on the central limit theorem, indicates that as the fineness of texture increases and the number of texture elements per resolvable unit increases, the intensity contrast of the texture decreases and the intensity histogram approaches a Gaussian shape. Subsequent psychophysical experiments showed that these image features indeed play critical roles in fineness perception; i.e., lowering the contrast made artificial and natural textures look finer, and this effect was most evident for textures with unimodal Gaussian-like intensity distributions. These findings indicate that the human visual system is able to estimate subresolution texture fineness on the basis of diagnostic image features correlated with subresolution fineness, such as the intensity contrast and the shape of the intensity histogram.
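The central-limit-theorem argument above can be simulated directly. This is a sketch under assumptions (exponential element intensities, illustrative element counts), not the paper's image analysis: averaging more unresolved elements per pixel lowers contrast and pulls the histogram's skewness toward the Gaussian value of zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def texture_stats(n_elements, n_pixels=100_000):
    # Each rendered pixel averages n_elements unresolved texture elements
    # whose intensities follow a skewed (exponential) distribution.
    pixels = rng.exponential(1.0, size=(n_pixels, n_elements)).mean(axis=1)
    contrast = pixels.std() / pixels.mean()   # RMS contrast
    z = (pixels - pixels.mean()) / pixels.std()
    skewness = float((z ** 3).mean())         # 0 for a Gaussian histogram
    return float(contrast), skewness

c_coarse, s_coarse = texture_stats(2)    # coarse texture: few elements per pixel
c_fine, s_fine = texture_stats(64)       # fine texture: many elements per pixel
```

As the element count per resolvable unit grows, both statistics move in the direction the experiments identify as looking "finer": lower contrast and a more Gaussian-shaped histogram.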


Journal of Vision | 2017

Visual wetness perception based on image color statistics

Masataka Sawayama; Edward H. Adelson; Shin'ya Nishida

Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
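The hue entropy statistic mentioned above can be computed as the Shannon entropy of a hue histogram. A minimal sketch, assuming hues normalized to [0, 1) and an illustrative bin count (not the paper's parameters):

```python
import numpy as np

def hue_entropy(hues, n_bins=36):
    # Shannon entropy (bits) of the hue histogram; hues lie in [0, 1).
    hist, _ = np.histogram(hues, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
many_hues = rng.random(10_000)                 # hues spread over the full circle
few_hues = 0.30 + 0.02 * rng.random(10_000)    # nearly monochromatic scene
h_many = hue_entropy(many_hues)
h_few = hue_entropy(few_hues)
```

A scene with many distinct hues yields high entropy, and per the findings above such scenes are where the wetness-enhancing transformation is most effective.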


Journal of Vision | 2015

Visual perception of surface wetness

Masataka Sawayama; Shin'ya Nishida

We can visually recognize a variety of surface states. Just a quick look is sufficient to see that the bath floor is dry, the road ahead is slippery, the window glass is frosty, or the ornament is dusty. If a given surface state perception relies on the analysis of diagnostic image features, an effective strategy to reveal those features and the associated visual processing is to find a stimulus transformation that alters the apparent surface state. Here we report an image transformation that makes dry objects look wet. This wet filter consists of two operations: (1) tone remapping with an accelerating nonlinear function that renders the intensity histogram positively skewed; (2) color saturation enhancement. In an experimental test, we applied the wet filter to a variety of natural textures from the McGill Calibrated Colour Image Database. The results of a wetness rating experiment showed that the wet-filtered images were perceived as wetter than the original images. In addition, the perceived wetness depended on the variance of hue: the wet filter was less effective for images with a small variance of hue. Optically, wetting a surface tends to increase specular reflection. In addition, as the incoming light scatters repeatedly within the surface liquid layer, the light leaving the surface tends to be darker and more saturated. The effects of these optical changes can be simulated by the two wet-filter operations. However, a positively skewed luminance histogram and high chromatic saturation may be caused by other factors; for instance, the visual scene may happen to include highly saturated glossy objects. This is presumably why hue variation matters: if the same image transformation occurs simultaneously in many different objects, the brain infers that the change likely has a single cause, such as a water shower in the present case. Meeting abstract presented at VSS 2015.
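The two wet-filter operations can be sketched in NumPy. This is a minimal illustration under assumed parameter values (the gamma exponent, saturation gain, and random test image are not from the abstract): an accelerating tone remap that skews the intensity histogram positively, followed by pushing each pixel's channels away from its achromatic point.

```python
import numpy as np

def wet_filter(rgb, gamma=3.0, sat_gain=1.5):
    # (1) Accelerating tone remap: pushes most intensities down, leaving a
    #     long bright tail, i.e. a positively skewed histogram.
    toned = np.clip(rgb.astype(float), 0.0, 1.0) ** gamma
    # (2) Saturation enhancement: push channels away from the per-pixel
    #     achromatic (mean) point.
    mean = toned.mean(axis=-1, keepdims=True)
    return np.clip(mean + sat_gain * (toned - mean), 0.0, 1.0)

def skewness(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

rng = np.random.default_rng(2)
dry = rng.random((64, 64, 3))          # stand-in for a natural texture image
wet = wet_filter(dry)
skew_dry = skewness(dry.mean(axis=-1))
skew_wet = skewness(wet.mean(axis=-1))
```

After filtering, the luminance histogram is more positively skewed than before, matching the first optical signature of wetting described above.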


Journal of Vision | 2015

Material-dependent shape distortion by local intensity order reversal.

Shin'ya Nishida; Masataka Sawayama; Takeaki Shimokawa

The visual image of an object is formed by a complex interaction among the material (reflectance), geometry (shape), and lighting of the object. How, and how well, the visual system recovers these image-formation components from the resultant image is a longstanding problem in vision science. With constant shape and lighting, material changes drastically alter the intensity distribution of the object image. However, as long as these changes can be ascribed to variations in parameters for diffuse and specular reflectance (e.g., addition of highlights), they had only minor effects on the local intensity orders (Sawayama & Nishida, VSS2014). This observation led us to the hypothesis that human shape-from-shading processing may be sensitive to the local intensity order information, but not to the steepness of the intensity gradient, to which material processing is very sensitive. To test this hypothesis, we examined human shape-from-shading perception for local-gradient-modulated images that shared the local intensity order map, but not the gradient magnitude map, with the original matte object images. Specifically, we randomly modulated the steepness of the intensity gradient between adjacent iso-intensity contours of the image. In the experiment, observers adjusted the tilt/slant of a gauge probe to match the apparent surface normal direction. We found that the perceived shapes of the local-gradient-modulated images were similar to those of the original images. We also examined shape perception for objects with asperity scattering, a class of reflectance not covered by diffuse/specular models that produces the appearances of velvet and peach. Compared to the matte image, the asperity scattering distorted the perceived shape when it caused local reversals of intensity order, but not when it completely reversed the intensity order map. These findings support the hypothesis that human shape-from-shading relies dominantly on the local intensity order information. Meeting abstract presented at VSS 2015.
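The stimulus manipulation described above, randomly modulating gradient steepness between iso-intensity contours while preserving the order map, can be sketched as a random monotonic lookup table (a toy NumPy illustration with assumed level count and step distribution, not the authors' rendering pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)

# A quantized object image with 256 iso-intensity levels.
image = rng.integers(0, 256, size=(48, 48))

# Randomly modulate the gradient steepness between adjacent iso-intensity
# levels: a lookup table built from random, strictly positive increments
# is monotonic, so pixel-wise intensity orders are untouched.
steps = rng.random(256) + 0.01
lut = np.cumsum(steps)
lut = (lut - lut[0]) / (lut[-1] - lut[0])
modulated = lut[image]

# Verify: any two pixels keep their brighter/darker/equal relation.
a, b = image.ravel()[:-1], image.ravel()[1:]
ma, mb = modulated.ravel()[:-1], modulated.ravel()[1:]
order_preserved = bool(np.all(np.sign(a - b) == np.sign(ma - mb)))
```

Because the increments are strictly positive, gradient magnitudes between levels vary randomly while the intensity-order map, the cue the hypothesis says shape-from-shading relies on, is left intact.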


Journal of Vision | 2013

Spatial organization affects lightness perception on articulated surrounds

Masataka Sawayama; Eiji Kimura

The articulation effect refers to a change in lightness contrast induced by adding small patches of different luminances to a uniform background surrounding a target in a lightness contrast display. This study investigated how local luminance signals are integrated to generate the articulation effect. We asked whether spatial organization due to perceptual grouping can influence the articulation effect even when the spatially averaged luminance of the surrounds is held constant. Grouping factors used were common-fate motion (Experiment 1), similarity of orientation (Experiment 2), and synchrony (Experiment 3). Results of all experiments consistently showed that the articulation effect was larger when the target was strongly grouped with the articulation patches. These findings provide converging evidence for the effects of spatial organization on the articulation effect. Moreover, they suggest that lightness computation underlying the articulation effect depends on a middle-level representation in which perceptual organization is at least partially established. The changes in lightness perception due to spatial organization could be accounted for by the double-anchoring theory of lightness (Bressan, 2006b).


international conference on human haptic sensing and touch enabled computer applications | 2018

Haptic Texture Perception on 3D-Printed Surfaces Transcribed from Visual Natural Textures

Scinob Kuroki; Masataka Sawayama; Shin'ya Nishida

Humans have a sophisticated ability to discriminate surface textures by touch, which is valuable for discriminating materials. Conventional studies have investigated this ability by using stimuli with simple (lower-order) statistical structures. Nevertheless, the structure of natural textures can be much more complex, and the human brain can encode complex (higher-order) spatial structures at least when they are processed by the visual system. To see how much the tactile system can encode complex surface patterns, we 3D-printed textured surfaces based on visual images of natural scenes including leaves and stones and conducted a haptic texture discrimination experiment. The mean surface carving depths were equated among the patterns. The participants touched the patterns in three modes: passive scan, static touch, and vibration only. The results showed that the “photo” patterns, which were visually very different from one another, were nearly indiscriminable by touch regardless of the touching mode. This suggests that though human touch may be good at discriminating differences in simple spatial structures such as statistics about the amplitude spectrum, it is relatively insensitive to more complex spatial structures, possibly due to spatial and temporal summation of local signals. Although further investigation is necessary to fully understand spatial statistics relevant to tactile texture perception, directly comparing touch with vision by using the 3D printing technology is a promising research strategy.


Annual Review of Vision Science | 2018

Motion Perception: From Detection to Interpretation

Shin'ya Nishida; Takahiro Kawabe; Masataka Sawayama; Taiki Fukiage

Visual motion processing can be conceptually divided into two levels. In the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. In the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight the recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible linking mechanisms of the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into computational mechanisms of motion perception, in addition to psychophysics and neurosciences, we review machine vision studies seeking to solve similar problems.

Collaboration


Dive into Masataka Sawayama's collaborations.

Top Co-Authors


Shin'ya Nishida

Nippon Telegraph and Telephone


Ken Goryo

Kyoto Women's University


Ryusuke Hayashi

National Institute of Advanced Industrial Science and Technology


Scinob Kuroki

Nippon Telegraph and Telephone
