Publication


Featured research published by Takahiro Kawabe.


PLOS ONE | 2014

Colour-temperature correspondences: when reactions to thermal stimuli are influenced by colour.

Hsin-Ni Ho; George Van Doorn; Takahiro Kawabe; Junji Watanabe; Charles Spence

In our daily lives, information concerning temperature is often provided by means of colour cues, with red typically being associated with warm/hot, and blue with cold. While such correspondences have been known about for many years, they have primarily been studied using subjective report measures. Here we examined this correspondence using two more objective response measures. First, we used the Implicit Association Test (IAT), a test designed to assess the strength of automatic associations between different concepts in a given individual. Second, we used a priming task that involved speeded target discrimination in order to assess whether priming colour or thermal information could invoke the crossmodal association. The results of the IAT confirmed that the association exists at the level of response selection, thus indicating that a participant’s responses to colour or thermal stimuli are influenced by the colour-temperature correspondence. The results of the priming experiment revealed that priming a colour affected thermal discrimination reaction times (RTs), but thermal cues did not influence colour discrimination responses. These results may therefore provide important clues as to the level of processing at which such colour-temperature correspondences are represented.


Consciousness and Cognition | 2011

Emotion colors time perception unconsciously

Yuki Yamada; Takahiro Kawabe

Emotion modulates our time perception. So far, the relationship between emotion and time perception has been examined with visible emotional stimuli. The present study investigated whether invisible emotional stimuli affect time perception. Using continuous flash suppression, a form of dynamic interocular masking, supra-threshold emotional pictures were masked or unmasked depending on whether the retinal position of continuous flashes in one eye coincided with that of the pictures in the other eye. Observers were asked to reproduce the perceived duration of a frame stimulus presented concurrently with a masked or unmasked emotional picture. Negative emotional stimuli lengthened the perceived duration of the frame stimulus relative to positive and neutral emotional stimuli, regardless of the visibility of the emotional pictures. These results suggest that negative emotion unconsciously accelerates an internal clock, altering time perception.


Journal of Vision | 2008

Spatiotemporal feature attribution for the perception of visual size.

Takahiro Kawabe

This study examined the role of spatiotemporal feature attribution in the perception of the visual size of objects. A small or a large leading disk, a test disk of variable size, and a probe disk of a fixed size were sequentially presented at the same position for durations of 16.7 ms with interstimulus intervals of 117 ms. Observers compared the visual size of the test with the probe disk. The size of the test disk was underestimated and overestimated when the test followed small and large leading disks, respectively (Experiment 1). These modulations of visual size occurred even when disks were sequentially presented so as to invoke apparent motion (Experiment 2). Furthermore, when two streams of apparent motion consisting of the three types of disk were diagonally overlapped, modulation of visual size occurred in accordance with the size of the attended leading disk (Experiment 3). Retinotopic and non-retinotopic feature attribution and the related attentional mechanisms are discussed.


Vision Research | 2015

Seeing liquids from visual motion

Takahiro Kawabe; Kazushi Maruya; Roland W. Fleming; Shin'ya Nishida

Most research on human visual recognition focuses on solid objects, whose identity is defined primarily by shape. In daily life, however, we often encounter materials that have no specific form, including liquids whose shape changes dynamically over time. Here we show that human observers can recognize liquids and their viscosities solely from image motion information. Using a two-dimensional array of noise patches, we presented observers with motion vector fields derived from diverse computer-rendered scenes of liquid flow. Our observers perceived liquid-like materials in the noise-based motion fields and could judge the simulated viscosity with surprising accuracy, given the total absence of non-motion information, including form. We find that the critical feature for apparent liquid viscosity is local motion speed, whereas for the impression of liquidness, image statistics related to spatial smoothness, including the mean discrete Laplacian of the motion vectors, are important. Our results show that the brain exploits a wide range of motion statistics to identify non-solid materials.
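
The spatial-smoothness statistic named in this abstract, the mean discrete Laplacian of the motion vectors, can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example (not the authors' analysis code): it convolves a motion vector field of shape (H, W, 2) with the standard 4-neighbour Laplacian kernel and averages the resulting magnitude, so that smoother, more liquid-like fields yield smaller values.

    import numpy as np
    from scipy.ndimage import convolve

    def mean_discrete_laplacian(flow):
        """Mean magnitude of the discrete Laplacian of a motion vector field.

        flow: array of shape (H, W, 2) holding the horizontal (u) and vertical
        (v) motion components of each patch. Smaller values indicate spatially
        smoother motion. Illustrative statistic only, not the authors' code.
        """
        # 4-neighbour discrete Laplacian kernel.
        kernel = np.array([[0.0,  1.0, 0.0],
                           [1.0, -4.0, 1.0],
                           [0.0,  1.0, 0.0]])
        lap_u = convolve(flow[..., 0], kernel, mode="nearest")
        lap_v = convolve(flow[..., 1], kernel, mode="nearest")
        # Average the Laplacian magnitude over the patch array.
        return float(np.mean(np.hypot(lap_u, lap_v)))

    # Example: a spatially smooth (liquid-like) field vs. spatially random motion.
    yy, xx = np.mgrid[0:32, 0:32]
    smooth_field = np.dstack([np.sin(xx / 8.0), np.cos(yy / 8.0)])
    random_field = np.random.default_rng(0).normal(size=(32, 32, 2))
    print(mean_discrete_laplacian(smooth_field))   # small: smooth motion
    print(mean_discrete_laplacian(random_field))   # large: jagged motion

Whether the study averaged Laplacian magnitudes in exactly this way is an assumption here; the sketch only shows that spatially smoother vector fields produce smaller values of this statistic.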


Journal of Vision | 2013

Direction of visual apparent motion driven by perceptual organization of cross-modal signals

Warrick Roseboom; Takahiro Kawabe; Shin'ya Nishida

A critical function of the human brain is to determine the relationship between sensory signals. In the case of signals originating from different sensory modalities, such as audition and vision, several processes have been proposed that may facilitate perception of correspondence between two signals despite any temporal discrepancies in physical or neural transmission. One proposal, temporal ventriloquism, suggests that audio-visual temporal discrepancies can be resolved with a capture of visual event timing by that of nearby auditory events. Such an account implies a fundamental change in the timing representations of the involved events. Here we examine if such changes are necessary to account for a recently demonstrated effect, the modulation of visual apparent motion direction by audition. By contrast, we propose that the effect is driven by segmentation of the visual sequence on the basis of perceptual organization in the cross-modal sequence. Using different sequences of cross-modal (auditory and tactile) events, we found that the direction of visual apparent motion was not consistent with a temporal capture explanation. Rather, reports of visual apparent motion direction were dictated by perceptual organization within cross-modal sequences, determined on the basis of apparent relatedness. This result adds to the growing literature indicating the importance of apparent relatedness and sequence segmentation in apparent timing. Moreover, it demonstrates that, contrary to previous findings, cross-modal interaction can play a critical role in determining organization of signals within a single sensory modality.


Scientific Reports | 2013

The cross-modal double flash illusion depends on featural similarity between cross-modal inducers

Warrick Roseboom; Takahiro Kawabe; Shin'ya Nishida

Despite extensive evidence of possible interactions between multisensory signals, it remains unclear at what level of sensory processing these interactions take place. When two identical auditory beeps (inducers) are presented in quick succession accompanied by a single visual flash, observers often report seeing two visual flashes rather than the physical single flash: the double flash illusion. This compelling illusion has often been considered to reflect direct interactions between neural activations in different primary sensory cortices. Against this simple account, here we show that simply making the inducer signals featurally distinct (e.g., high- and low-pitch beeps) abolishes the illusory double flash. This result suggests that a critical component underlying the illusion is perceptual grouping of the inducer signals, consistent with the notion that multisensory combination is preceded by a determination of whether the relevant signals share a common source of origin.


Proceedings of the National Academy of Sciences of the United States of America | 2015

Perceptual transparency from image deformation

Takahiro Kawabe; Kazushi Maruya; Shin'ya Nishida

Significance: The perception of liquids, particularly water, is a vital sensory function for survival, but little is known about the visual perception of transparent liquids. Here we show that human vision has an excellent ability to perceive a transparent liquid solely from dynamic image deformation. No other known image cues are needed for the perception of transparent surfaces. Static deformation is not effective for perceiving transparent liquids. Human vision interprets dynamic image deformation as caused by light refraction at the moving liquid’s surface. A transparent liquid is well perceived from artificial image deformations that share only basic flow features with image deformations caused by physically correct light refraction.

Human vision has a remarkable ability to perceive two layers at the same retinal locations, a transparent layer in front of a background surface. Critical image cues to perceptual transparency, studied extensively in the past, are changes in luminance or color that could be caused by light absorption and reflection by the front layer, but such image changes may not be clearly visible when the front layer consists of a pure transparent material such as water. Our daily experiences with transparent materials of this kind suggest that an alternative potential cue to visual transparency is image deformation of a background pattern caused by light refraction. Although previous studies have indicated that these image deformations, at least static ones, play little role in perceptual transparency, here we show that dynamic image deformations of the background pattern, which could be produced by light refraction at a moving liquid’s surface, can produce a vivid impression of a transparent liquid layer without the aid of any other visual cues to the presence of a transparent layer. Furthermore, a transparent liquid layer perceptually emerges even from a randomly generated dynamic image deformation as long as it is similar to real liquid deformations in its spatiotemporal frequency profile. Our findings indicate that the brain can perceptually infer the presence of “invisible” transparent liquids by analyzing the spatiotemporal structure of dynamic image deformation, for which it uses a relatively simple computation that does not require high-level knowledge about the detailed physics of liquid deformation.
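
The "randomly generated dynamic image deformation" described above can be sketched computationally. The Python code below is a rough, hypothetical illustration rather than the authors' stimulus code: it draws white-noise displacement fields, smooths them over time, band-pass filters them in space (difference of Gaussians) to approximate a low spatiotemporal-frequency profile, and warps a background texture frame by frame. The filter widths, amplitude, frame count, and background texture are all placeholder assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def random_deformation_movie(background, n_frames=60, amplitude=3.0,
                                 sigma_fine=2.0, sigma_coarse=6.0,
                                 sigma_time=2.0, seed=0):
        """Warp a 2D background image with band-limited random deformation.

        Displacement fields are white noise smoothed over time and band-pass
        filtered in space, a crude stand-in for the low spatiotemporal-frequency
        deformations produced by refraction at a moving liquid surface.
        """
        rng = np.random.default_rng(seed)
        h, w = background.shape
        # Raw noise with axes (displacement component, time, y, x).
        noise = rng.normal(size=(2, n_frames, h, w))
        # Temporal smoothing plus spatial band-pass (difference of Gaussians).
        fine = gaussian_filter(noise, sigma=(0, sigma_time, sigma_fine, sigma_fine))
        coarse = gaussian_filter(noise, sigma=(0, sigma_time, sigma_coarse, sigma_coarse))
        disp = fine - coarse
        disp *= amplitude / (np.abs(disp).max() + 1e-12)  # scale to peak displacement

        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        frames = []
        for t in range(n_frames):
            # Resample the background at displaced coordinates for this frame.
            coords = np.stack([yy + disp[1, t], xx + disp[0, t]])
            frames.append(map_coordinates(background, coords, order=1, mode="reflect"))
        return np.stack(frames)

    # Example with a random-texture background standing in for any pattern.
    bg = np.random.default_rng(1).random((128, 128))
    movie = random_deformation_movie(bg)
    print(movie.shape)  # (60, 128, 128)

In the study, the random deformations were constrained to match the spatiotemporal frequency profile of deformations produced by simulated refraction; the difference-of-Gaussians band-pass here is only a stand-in for that constraint.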


Frontiers in Psychology | 2013

Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

Warrick Roseboom; Takahiro Kawabe; Shin'ya Nishida

It is now well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated whether this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex stimuli (audio-visual speech; Experiment 1) and simple stimuli (high- and low-pitch audio matched with vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates, regardless of spatial overlap.


Bioscience, Biotechnology, and Biochemistry | 2008

Essential oil of lavender inhibited the decreased attention during a long-term task in humans.

Kuniyoshi Shimizu; Mayumi Gyokusen; Shingo Kitamura; Takahiro Kawabe; Tomoaki Kozaki; Keita Ishibashi; Ryusuke Izumi; Wataru Mizunoya; Koichiro Ohnuki; Ryuichiro Kondo

This study examined the effects of odors on sustained attention during a vigilance task. Two essential oils (lavender and eucalyptus) and two materials (l-menthol and linalyl acetate) were compared with a control. The increase in reaction time was significantly lower with lavender than with the control. The results suggest that the administration of lavender helped to maintain sustained attention during the long-term task.


Vision Research | 2007

Subjective disappearance of a target by flickering flankers.

Takahiro Kawabe; Kayo Miura

This study examined the subjective disappearance of a visual object induced by flickering flankers: a neighboring flickering ring (Experiments 1 and 2), a set of four flickering dots (Experiment 3), and apparent motion (Experiment 4). Observers were asked to report whether a target disappeared during 10 s of stimulus presentation, and we used the proportion of disappearance as the measure of performance. Interestingly, subjective disappearance was rarely observed when flickering flankers were presented within 0.5 degrees of the target. However, disappearance was observed when dynamic random-dot patterns were presented within 0.5 degrees of the target border (Experiment 5). Our results indicate that flicker of flankers near the target disrupts target adaptation or attentional inhibition, so that the target representation persists at higher-order stages of object selection and the target does not disappear.

Collaboration


Dive into Takahiro Kawabe's collaborations.

Top Co-Authors

Shin'ya Nishida

Nippon Telegraph and Telephone

Masataka Sawayama

Japan Society for the Promotion of Science
