Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Takehiro Nagai is active.

Publication


Featured research published by Takehiro Nagai.


Vision Research | 2015

Temporal properties of material categorization and material rating: visual vs non-visual material features

Takehiro Nagai; Toshiki Matsushima; Kowa Koida; Yusuke Tani; Michiteru Kitazaki; Shigeki Nakauchi

Humans can easily recognize the material categories of objects, such as glass, stone, and plastic, by sight. However, little is known about the kinds of surface quality features that contribute to such material category recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and the effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials from seven categories (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures belonged to the same material category; reaction times and the effects of stimulus duration were measured. The results showed that visual feature ratings correlated with material discrimination performance at short reaction times or short stimulus durations, while non-visual feature ratings correlated with performance only at long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in processing time, although the cause is unclear. Visual surface features may be the main contributors to material recognition in daily life, while non-visual features may contribute only weakly, if at all.
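The core analysis here, correlating per-stimulus feature ratings with discrimination performance, can be sketched as a simple Pearson correlation. This is a minimal illustration with hypothetical numbers, not the study's actual data or code:

```python
import numpy as np

def feature_performance_correlation(ratings, accuracy):
    """Pearson correlation between mean surface-feature ratings
    (e.g., glossiness on a 7-point scale) and material-discrimination
    accuracy, both computed per stimulus picture."""
    return np.corrcoef(ratings, accuracy)[0, 1]

# Hypothetical per-picture values for seven material categories
ratings = np.array([6.1, 5.3, 4.8, 3.9, 3.2, 2.7, 2.1])          # mean feature rating
accuracy = np.array([0.92, 0.88, 0.80, 0.74, 0.69, 0.61, 0.55])  # discrimination accuracy
print(feature_performance_correlation(ratings, accuracy))
```

In this framework, a correlation present only at short stimulus durations would mark the feature as contributing to fast, visually driven discrimination.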


PLOS ONE | 2013

Enhancement of Glossiness Perception by Retinal-Image Motion: Additional Effect of Head-Yoked Motion Parallax

Yusuke Tani; Keisuke Araki; Takehiro Nagai; Kowa Koida; Shigeki Nakauchi; Michiteru Kitazaki

It has been argued that when an observer moves, contingent retinal-image motion of a stimulus strengthens perceived glossiness. This has been attributed to veridical perception of three-dimensional structure from motion parallax. However, it has not been investigated whether the effect of motion parallax exceeds that of retinal-image motion alone. Using a magnitude estimation method, we examine in this paper whether cross-modal coordination between the stimulus change and the observer's motion (i.e., motion parallax) is essential, or whether retinal-image motion alone is sufficient, for enhancing perceived glossiness. Our data show that retinal-image motion simulating motion parallax without head motion strengthened perceived glossiness, but that its effect was weaker than that of motion parallax with head motion. These results suggest an additional effect of cross-modal coordination between vision and proprioception on glossiness perception. That is, motion parallax enhances the perception of glossiness beyond the effect of retinal-image motion of specular surfaces.


PLOS ONE | 2014

Experts and Novices Use the Same Factors–But Differently–To Evaluate Pearl Quality

Yusuke Tani; Takehiro Nagai; Kowa Koida; Michiteru Kitazaki; Shigeki Nakauchi

Well-trained experts in pearl grading are thought to evaluate pearls according to their glossiness, interference color, and shape. However, the characteristics of their evaluations are not fully understood. Using pearl grading experiments, we investigate the grading consistency of novice participants (i.e., those without knowledge of pearl grading) and experts, compare the novices' grading with that of the experts, and discuss the relationship between grading, interference color, and glossiness. We found that novices' grading was significantly less concordant with the experts' average grading than the experts' own grading was; more than half of the novices graded pearls the opposite of how experts graded the same pearls. However, while experts graded pearls more consistently than novices did, the novices' consistency was still relatively high. We also found group differences in per-trial regression analyses that used interference color and glossiness as explanatory variables. Although the regression coefficient was significant in 60% of novices' trials, significant trials were rarer for the experts (20%). This indicates that novices can also make use of these two factors, but that their usage is simpler than the experts'. These results suggest that experts and novices share some values regarding pearls, but that the evaluation method is more elaborate in experts.
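The per-trial regression described above, with interference color and glossiness as explanatory variables for grades, can be sketched with ordinary least squares. The variable names and numbers below are hypothetical, not the study's data:

```python
import numpy as np

def grade_regression(interference, glossiness, grades):
    """Least-squares fit of pearl grades on interference-color and
    glossiness scores. Returns [intercept, b_interference, b_glossiness]."""
    X = np.column_stack([np.ones(len(grades)), interference, glossiness])
    coef, *_ = np.linalg.lstsq(X, grades, rcond=None)
    return coef

# Hypothetical scores for six pearls
interference = np.array([0.9, 0.7, 0.6, 0.4, 0.3, 0.1])
glossiness = np.array([0.8, 0.9, 0.5, 0.6, 0.2, 0.3])
grades = np.array([5.0, 4.6, 3.5, 3.1, 1.9, 1.6])
print(grade_regression(interference, glossiness, grades))
```

Positive coefficients on both predictors would mirror the finding that both factors inform grading; comparing how often the coefficients reach significance across trials is what distinguishes novices from experts here.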


i-Perception | 2013

Image regions contributing to perceptual translucency: A psychophysical reverse-correlation study.

Takehiro Nagai; Yuki Ono; Yusuke Tani; Kowa Koida; Michiteru Kitazaki; Shigeki Nakauchi

The spatial luminance relationship between shading patterns and specular highlights is suggested to be a cue for perceptual translucency (Motoyoshi, 2010). Although local image features are also important for translucency perception (Fleming & Bülthoff, 2005), they have rarely been investigated. Here, we aimed to extract spatial regions related to translucency perception from computer graphics (CG) images of objects using a psychophysical reverse-correlation method. From many trials in which the observer compared the perceptual translucency of two CG images, we obtained translucency-related patterns showing which image regions were related to perceptual translucency judgments. An analysis of the luminance statistics calculated within these image regions showed that (1) the global rms contrast within an entire CG image was not related to perceptual translucency and (2) the local mean luminance of specific image regions within the CG images correlated well with perceptual translucency. However, the image regions contributing to perceptual translucency differed greatly between observers. These results suggest that perceptual translucency does not rely on global luminance statistics such as global rms contrast, but rather depends on local image features within specific image regions. There may be some "hot spots" effective for perceptual translucency, although which of many hot spots are used in judging translucency may be observer dependent.
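The two image statistics contrasted in this study, global rms contrast and local mean luminance within a region, are straightforward to compute. The array and mask below are illustrative, not the study's stimuli:

```python
import numpy as np

def global_rms_contrast(image):
    """Global rms contrast: standard deviation of luminance
    divided by mean luminance, over the whole image."""
    return image.std() / image.mean()

def local_mean_luminance(image, mask):
    """Mean luminance within a region given by a boolean mask
    (e.g., a translucency-related 'hot spot')."""
    return image[mask].mean()

# Hypothetical 4x4 luminance image and a bright-region mask
img = np.array([[0.2, 0.4, 0.6, 0.8],
                [0.3, 0.5, 0.7, 0.9],
                [0.2, 0.4, 0.6, 0.8],
                [0.1, 0.3, 0.5, 0.7]])
mask = img > 0.5
print(global_rms_contrast(img))
print(local_mean_luminance(img, mask))
```

In the reverse-correlation logic above, only the second statistic, restricted to observer-specific regions, correlated well with perceptual translucency.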


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2016

Dissociation of equilibrium points for color-discrimination and color-appearance mechanisms in incomplete chromatic adaptation

Tomoharu Sato; Takehiro Nagai; Ichiro Kuriki; Shigeki Nakauchi

We compared the color-discrimination thresholds and supra-threshold color differences (STCDs) obtained under complete chromatic adaptation (gray) and incomplete chromatic adaptation (red). The color-difference profiles were examined by evaluating the perceptual distances between various color pairs using maximum likelihood difference scaling. In the gray condition, the chromaticities corresponding to the smallest threshold and the largest color difference were almost identical. In contrast, in the red condition, they were dissociated. The peaks of the sensitivity functions derived from the color-discrimination thresholds and STCDs along the L-M axis differed systematically between the adaptation conditions. These results suggest that the color signals involved in color discrimination and STCD tasks are controlled by separate mechanisms with different characteristic properties.


Journal of Vision | 2015

Perception of a thick transparent object is affected by object and background motions but not dependent on the motion speed

Shohei Ueda; Yusuke Tani; Takehiro Nagai; Kowa Koida; Shigeki Nakauchi; Michiteru Kitazaki

When a thick transparent object such as a glass ball is in front of a textured background, the texture appears distorted through the object. This distortion field is a cue for judging the refractive index of transparent objects (Fleming, Jäkel, & Maloney, 2011). We aimed to investigate the effects of a dynamic distortion field on judgments of the refractive index of transparent objects. A test and a matching stimulus were presented adjacently on a CRT monitor, and subjects were asked to adjust the refractive index of the matching stimulus until its material appeared identical to that of the test stimulus. In Experiment 1, the test stimulus was randomly chosen from stimuli of five refractive indices (1.3-1.7), and either rotated around the vertical axis or remained stationary. The matching stimulus was always stationary, and subjects could change its refractive index. We found that the perceived refractive index was higher for the moving object than for the static object. In Experiment 2, the background texture of the test stimulus moved horizontally back and forth instead of the object moving, and we found that this background motion also increased the perceived refractive index. In Experiment 3, we varied the background-motion speed (38% to 300% of that in Experiment 2). We confirmed the overestimation effect of the moving background and found that the degree of overestimation was independent of the motion speed. In conclusion, a dynamic distortion field caused by object or background motion raised the perceived refractive index, but its speed did not matter. The results therefore cannot be explained by overestimation of the distortion field due to its dynamic change. Rather, the dynamic distortion field may make the object appear more rigid than a static distortion field does, inducing an overestimation of refractive index, since rigid materials generally have higher refractive indices than non-rigid materials. Meeting abstract presented at VSS 2015.


Journal of Vision | 2009

Different hue coding underlying figure segregation and region detection tasks.

Takehiro Nagai; Keiji Uchikawa

Figure segregation from the background is one of the important functions of color vision because it is a prerequisite for shape recognition. However, little is known about the chromatic mechanisms underlying figure segregation, as opposed to those underlying mere color discrimination and detection. We investigated whether color difference thresholds differ between a shape discrimination task (involving figure segregation) and a simple region detection task. In the shape discrimination task, the observer discriminated the shapes of two figures that could be segregated from their background on the basis of a color direction (hue) difference. In the region detection task, the observer simply detected a square region against its background. Thresholds of color direction differences were measured for a range of background color directions in each task. In addition, we added saturation variation in one condition to investigate the involvement of the cone-opponent channels in these tasks. First, the results showed that the saturation variation increased the thresholds evenly for all background color directions, suggesting that higher-order color mechanisms, rather than the early cone-opponent mechanisms, are involved in both tasks. Second, the shapes of the background color direction-threshold functions differed between the two tasks, and these shape differences were consistent across all observers, suggesting that hue information may be encoded differently for shape discrimination and region detection. Furthermore, differences in spatial frequency components and in the requirement for orientation extraction rarely affected the shapes of the threshold functions in additional experiments, suggesting that hue encoding for shape discrimination may differ from that for region detection at a late stage of form processing where local orientation signals are globally integrated.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2008

Characteristics of grouping colors for figure segregation on a multicolored background

Takehiro Nagai; Keiji Uchikawa

A figure is segregated from its background when the colored elements belonging to the figure are grouped together. We investigated the range of color distribution conditions in which a figure could be segregated from its background on the basis of color distribution differences. The stimulus was a multicolored texture composed of randomly shaped pieces, divided into a test region and a background region whose pieces had different color distributions in the OSA Uniform Color Space. In our experiments, the subject segregated the figure of the test region using two different procedures. Since Euclidean distance in the OSA Uniform Color Space corresponds to perceived color difference, if segregation thresholds were determined by color difference alone, they should be independent of position and direction in the color space. The thresholds, however, did depend on position and direction in the OSA Uniform Color Space, suggesting that color difference is not the only factor in figure segregation by color. Moreover, the dependence of the thresholds on position and direction was influenced by distances in a cone-opponent space whose axes were normalized by discrimination thresholds, suggesting that figure segregation thresholds are determined by factors in the cone-opponent space similar to those governing color discrimination. An analysis of the results by categorical color naming suggests that categorical color perception may affect figure segregation only slightly.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2016

Contrast adaptation to luminance and brightness modulations.

Takehiro Nagai; Kazuki Nakayama; Yuki Kawashima; Yasuki Yamauchi

Perceived brightness and color contrast decrease after viewing a light that is temporally modulated along a certain direction in a color space, a phenomenon known as contrast adaptation. We investigated whether contrast adaptation along the luminance direction arises from modulation of luminance signals or of apparent brightness signals. The stimulus consisted of two circles on a gray background presented on a CRT monitor. In the adaptation phase, the luminance and chromaticity of one circle were temporally modulated, while the other circle was kept at a constant luminance and a color metameric with an equal-energy white. We employed two types of temporal modulation: in luminance and in brightness. Chromaticity was sinusoidally modulated along the L-M axis, producing a dissociation between luminance and brightness (the Helmholtz-Kohlrausch effect). In addition, luminance modulation was minimized in the brightness-modulation condition, while brightness modulation was minimized in the luminance-modulation condition. In the test phase, an asymmetric matching method was used to measure the magnitude of contrast adaptation for both modulations. Our results showed that, although contrast adaptation along the luminance direction occurred for both modulations, adaptation to the luminance modulation was significantly stronger than adaptation to the brightness modulation, regardless of the temporal frequency of the adapting modulation. These results suggest that luminance modulation is more influential in contrast adaptation than brightness modulation.


Journal of Vision | 2011

Spatiotemporal averaging of perceived brightness along an apparent motion trajectory

Takehiro Nagai; R. D. Beer; Erin Krizay; Donald I. A. MacLeod

Collaboration


Dive into Takehiro Nagai's collaborations.

Top Co-Authors

Shigeki Nakauchi, Toyohashi University of Technology
Yuki Kawashima, Tokyo Institute of Technology
Kowa Koida, Toyohashi University of Technology
Michiteru Kitazaki, Toyohashi University of Technology
Yusuke Tani, Toyohashi University of Technology
Keiji Uchikawa, Tokyo Institute of Technology
Tomoharu Sato, Toyohashi University of Technology
Ryo Nishijima, Toyohashi University of Technology