Publication


Featured research published by Shoko Kanaya.


PLOS ONE | 2012

Does Seeing Ice Really Feel Cold? Visual-Thermal Interaction under an Illusory Body-Ownership

Shoko Kanaya; Yuka Matsushima; Kazuhiko Yokosawa

Although visual information seems to affect thermal perception (e.g., a red color is associated with heat), previous studies have failed to demonstrate an interaction between the visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI), wherein an individual feels that a prosthetic (rubber) hand belongs to him or her. This study tests the possibility that ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually presented thermal information, and tactually presented physical thermal information. Results indicated that the sight of an apparently thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance, for the visual-thermal interaction, of ownership of the body part touched by the visual object is discussed.
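The design described above is a 2 × 2 × 2 factorial manipulation. Purely as an illustrative sketch (the factor labels and level names below are assumptions, not taken from the paper), the full set of orthogonal conditions could be enumerated as follows:

```python
# Illustrative sketch of a 2 x 2 x 2 factorial design; factor names and
# levels are assumed for illustration, not taken from the original study.
from itertools import product

factors = {
    "hand_ownership": ["RHI_induced", "no_RHI"],            # induced hand-ownership
    "visual_thermal_cue": ["hot_object", "cold_object"],     # visually presented thermal information
    "physical_temperature": ["warm_touch", "cool_touch"],    # tactually presented thermal information
}

# Enumerate all 8 orthogonal combinations of the three variables.
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, cond in enumerate(conditions, 1):
    print(i, cond)
```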


Psychonomic Bulletin & Review | 2011

Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.

Shoko Kanaya; Kazuhiko Yokosawa

Many studies of multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes, where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency in participants' auditory localization bias, or the ventriloquism effect, using spoken utterances and two videos of a talking face. The salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, whereas previous studies have reported little dependency of ventriloquism on the reality of the stimuli. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference with auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.
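The ventriloquism effect reported here is a localization bias: the shift of the reported sound position toward the visual stimulus. A minimal sketch of how such a bias could be quantified (variable names and numbers are illustrative assumptions, not the authors' analysis):

```python
# Minimal sketch of a ventriloquism-bias measure: mean shift of the reported
# sound location toward the visual stimulus. Positions and responses are
# placeholder assumptions, not data from the study.
import numpy as np

sound_pos = 0.0          # true azimuth of the sound (deg)
visual_pos = 10.0        # azimuth of the attracting visual stimulus (deg)
reported = np.array([2.1, 3.4, 1.8, 4.0, 2.7])   # hypothetical reported locations (deg)

# Positive values mean the reports were pulled toward the visual stimulus.
bias = np.mean((reported - sound_pos) * np.sign(visual_pos - sound_pos))
print(f"mean localization bias: {bias:.2f} deg")
```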


Perception | 2016

Cross-Modal Correspondence Among Vision, Audition, and Touch in Natural Objects: An Investigation of the Perceptual Properties of Wood

Shoko Kanaya; Kenji Kariya; Waka Fujisaki

Certain systematic relationships are often assumed between information conveyed by multiple sensory modalities; for instance, a small figure and a high pitch may be perceived as more harmonious. This phenomenon, termed cross-modal correspondence, may result from correlations between multi-sensory signals learned through daily experience of the natural environment. If so, we would observe cross-modal correspondences not only in the perception of artificial stimuli but also in the perception of natural objects. To test this hypothesis, we reanalyzed data collected previously in our laboratory examining perceptions of the material properties of wood using vision, audition, and touch. We compared participant evaluations of three perceptual properties (surface brightness, sharpness of sound, and smoothness) of the wood blocks obtained separately via vision, audition, and touch. Significant positive correlations were identified for all properties in the audition–touch comparison, and for two of the three properties in the vision–touch comparison. By contrast, no properties exhibited significant positive correlations in the vision–audition comparison. These results suggest that we learn correlations between multi-sensory signals through experience; however, the strength of this statistical learning apparently depends on the particular combination of sensory modalities involved.
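The reanalysis amounts to correlating, across wood samples, the evaluations of each property obtained in one modality with those obtained in another. A hedged sketch of that computation (the rating matrices below are random placeholders standing in for the laboratory data, and the sample count is assumed):

```python
# Sketch of the pairwise cross-modal correlation analysis; ratings are random
# placeholders, one value per wood sample for each property and modality.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_samples = 22                      # number of wood blocks (assumed)
properties = ["brightness", "sharpness_of_sound", "smoothness"]
modalities = ["vision", "audition", "touch"]

# ratings[modality][property] -> one evaluation per wood sample
ratings = {m: {p: rng.normal(size=n_samples) for p in properties} for m in modalities}

for a, b in [("audition", "touch"), ("vision", "touch"), ("vision", "audition")]:
    for p in properties:
        r, pval = pearsonr(ratings[a][p], ratings[b][p])
        print(f"{a}-{b:<8s} {p:<18s} r = {r:+.2f} (p = {pval:.3f})")
```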


Perception | 2015

Effects of Frequency Separation and Diotic/Dichotic Presentations on the Alternation Frequency Limits in Audition Derived from a Temporal Phase Discrimination Task.

Shoko Kanaya; Waka Fujisaki; Shin'ya Nishida; Shigeto Furukawa; Kazuhiko Yokosawa

Temporal phase discrimination is a useful psychophysical task for evaluating how sensory signals, synchronously detected in parallel, are perceptually bound by human observers. In this task, two stimulus sequences synchronously alternate between two states (say, A-B-A-B and X-Y-X-Y) in either of two temporal phases (i.e., A and B are respectively paired with X and Y, or vice versa). The critical alternation frequency beyond which participants cannot discriminate the temporal phase is measured as an index characterizing the temporal property of the underlying binding process. This task has been used to reveal the mechanisms underlying visual and cross-modal binding. To directly compare these binding mechanisms with those in another modality, this study used the temporal phase discrimination task to reveal the processes underlying auditory binding. The two sequences were alternations between two pitches. We manipulated the distance between the two sequences by changing the intersequence frequency separation or the presentation ears (diotic vs. dichotic). Results showed that the alternation frequency limit ranged from 7 to 30 Hz, becoming higher as the intersequence distance decreased, as is the case in vision. However, unlike in vision, auditory phase discrimination limits were higher and more variable across participants.
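The paradigm hinges on two sequences that alternate in synchrony and differ only in their relative temporal phase (A paired with X, or A paired with Y). A minimal sketch of how such stimulus timelines could be constructed (the alternation rate and sampling values are assumptions for illustration):

```python
# Sketch of the two temporal-phase conditions in a phase discrimination task:
# two sequences alternate in synchrony, paired either in-phase (A-X, B-Y) or
# out-of-phase (A-Y, B-X). Parameter values are illustrative assumptions.
import numpy as np

def alternation(states, switch_hz, duration_s, sample_hz=1000):
    """One label per time sample; the state switches every 1/switch_hz seconds."""
    t = np.arange(0.0, duration_s, 1.0 / sample_hz)
    idx = (np.floor(t * switch_hz) % 2).astype(int)
    return [states[i] for i in idx]

switch_hz = 8.0   # assumed alternation rate under test (Hz)
seq1 = alternation(["A", "B"], switch_hz, duration_s=1.0)

# Same temporal phase: A coincides with X, B with Y.
seq2_same = alternation(["X", "Y"], switch_hz, duration_s=1.0)
# Opposite temporal phase: A coincides with Y, B with X.
seq2_opp = alternation(["Y", "X"], switch_hz, duration_s=1.0)

print(list(zip(seq1, seq2_same))[:3])   # [('A', 'X'), ('A', 'X'), ('A', 'X')]
print(list(zip(seq1, seq2_opp))[:3])    # [('A', 'Y'), ('A', 'Y'), ('A', 'Y')]
```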


Journal of Vision | 2015

The correspondence between neutral voice and face is mediated by common perceptual properties

Shoko Kanaya; Yoshiyuki Ueda; Hideyuki Tochiya; Kazuhiko Yokosawa

We can infer the voice of an unfamiliar person from his or her face. One hypothesis attributes this to common information conveyed by voices and faces. However, the nature of this critical information remains unclear (Mavica & Barenholz, 2013). Some recent reports have shown that auditory and visual stimuli that have no direct relation (e.g., music–color, timbre–visual texture) nonetheless appear to be related on the basis of certain perceptual and emotional properties (Palmer et al., 2013; Peterson et al., 2014). The present study examined whether supra-modal information, such as perceptual and emotional properties and personality traits, can mediate such inferential links between voice and face. In this experiment, voices conveyed neutrally spoken sentences, and faces were neutral pictures of male and female faces. To investigate indirect relationships between voice and face, the models in the pictures were different people from the speakers. In the first task, one voice was presented simultaneously with multiple faces, and participants had to select the first, second, and third faces that corresponded to the presented voice. In a second and then a third task, the voice and face were independently presented along with 18 pairs of bipolar adjectives. Using an eight-point scale, participants rated the likelihood of a voice or face matching a given pole. Adjectives described perceptual properties (e.g., smooth–rough), emotional properties (e.g., happy–sad), and personality traits (e.g., passive–dominant). For each adjective pair, a weighted average of the ratings of the faces selected as corresponding to each voice was calculated. Results showed that these weighted averages were strongly correlated with the ratings for the voice itself, especially for certain perceptual adjectives (e.g., r = .74 for glossy–matte). This suggests that the correspondence between neutral voices and faces is mediated mainly by perceptual properties. Meeting abstract presented at VSS 2015.
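The key analysis pairs each voice with the faces chosen as its first, second, and third matches, computes a rank-weighted average of those faces' adjective ratings, and correlates that average with the rating given to the voice itself. A sketch under assumed weights and placeholder data (the abstract does not specify the weighting scheme or sample sizes):

```python
# Sketch of the rank-weighted face-rating analysis; the weights, ratings, and
# sample sizes are illustrative assumptions, not the study's actual values.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_voices, n_faces = 12, 20
weights = np.array([3.0, 2.0, 1.0])          # assumed weights for 1st/2nd/3rd choice

voice_ratings = rng.uniform(1, 8, n_voices)  # rating of each voice on one adjective pair
face_ratings = rng.uniform(1, 8, n_faces)    # rating of each face on the same pair

# For each voice, indices of the faces chosen as 1st, 2nd, 3rd match (hypothetical).
choices = np.array([rng.choice(n_faces, size=3, replace=False) for _ in range(n_voices)])

# Rank-weighted average of the chosen faces' ratings, one value per voice.
weighted_face = (face_ratings[choices] * weights).sum(axis=1) / weights.sum()

r, p = pearsonr(weighted_face, voice_ratings)
print(f"r = {r:+.2f}, p = {p:.3f}")
```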


Journal of Vision | 2011

Self-produced stimulation can elicit rubber hand illusion

Kazuhiko Yokosawa; Shoko Kanaya; Takahiro Ishiwata

The rubber hand illusion (RHI) is known to reflect the role of multi-sensory interaction in coherent body representation; it has mainly been studied using tactile stimuli provided by an experimenter. In the self-produced-touch condition, a participant moved his/her right hand with a haptic device to repeatedly touch his/her invisible real left hand; a virtual left hand and visual cues were visible on the CRT display, and the pointer of the haptic device touched this virtual hand in perfect synchrony with the participant's right-hand motions. In the externally-produced-touch condition, an experimenter repeatedly touched the participant's invisible real left hand, using the haptic device with visual cues on the CRT display. In the control condition, the participant repeatedly touched his/her invisible real left hand, again using the haptic device but without a visible pointer. The RHI was induced in three six-minute sessions. After each session, the shift in the perceived position of the left hand from its pre-session position was measured, and the participant completed a questionnaire (e.g., "I felt as if I were touching the hand on the display").
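One of the measures described above is the shift in the judged position of the hidden left hand after each session relative to before (proprioceptive drift). A minimal sketch of that computation, with placeholder numbers standing in for the measured positions:

```python
# Minimal sketch of the proprioceptive-drift measure: change in the judged
# position of the hidden left hand after each induction session relative to
# the pre-session judgment. All numbers are placeholder assumptions.
import numpy as np

pre_session = np.array([0.5, 0.3, 0.6])    # judged hand position before each session (cm)
post_session = np.array([2.1, 1.8, 2.4])   # judged hand position after each session (cm)

drift = post_session - pre_session          # positive = shift toward the virtual hand
print("drift per session (cm):", drift)
print("mean drift (cm):", drift.mean())
```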


I-perception | 2011

Syllable Congruency of Audio-Visual Speech Stimuli Facilitates the Spatial Ventriloquism Only with Bilateral Visual Presentations

Shoko Kanaya; Kazuhiko Yokosawa

Spatial ventriloquism refers to a shift in the perceived location of a sound toward a synchronized visual stimulus. It has been assumed to reflect early processes uninfluenced by cognitive factors such as syllable congruency between audio-visual speech stimuli. Conventional experiments have examined compelling situations that typically entail a single pair of audio and visual stimuli to be bound. However, in natural environments our multisensory system must select the relevant sensory signals to be bound from among adjacent stimuli. This selection process may depend upon higher (cognitive) mechanisms. We investigated whether a cognitive factor affects the size of the ventriloquism effect when an additional visual stimulus is presented alongside a conventional audio-visual pair. Participants were presented with a set of audio-visual speech stimuli, comprising one movie, or two bilateral movies, of a person uttering single syllables, together with recordings of this person speaking the same syllables. One of the movies and the speech sound were combined in either a congruent or an incongruent way. Participants had to identify sound locations. Results show that syllable congruency affected the size of the ventriloquism effect only when two movies were presented simultaneously. The selection of a relevant stimulus pair from among two or more candidates can thus be regulated by higher processes.


Psychologia | 2008

Proofreaders Show a Generalized Ability to Allocate Spatial Attention to Detect Changes

Michiko Asano; Shoko Kanaya; Kazuhiko Yokosawa


Brain and nerve | 2012

[Ventriloquism and audio-visual integration of voice and face].

Kazuhiko Yokosawa; Shoko Kanaya


Journal of Vision | 2013

Comparisons of temporal frequency limits for cross-attribute binding tasks in vision and audition

Shoko Kanaya; Waka Fujisaki; Shin'ya Nishida; Shigeto Furukawa; Kazuhiko Yokosawa

Collaboration


Dive into Shoko Kanaya's collaborations.

Top Co-Authors

Waka Fujisaki

National Institute of Advanced Industrial Science and Technology

Shin'ya Nishida

Nippon Telegraph and Telephone
