Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Waka Fujisaki is active.

Publication


Featured research published by Waka Fujisaki.


Proceedings of the Royal Society of London B: Biological Sciences | 2010

A common perceptual temporal limit of binding synchronous inputs across different sensory attributes and modalities

Waka Fujisaki; Shin'ya Nishida

The human brain processes different aspects of the surrounding environment through multiple sensory modalities, and each modality can be subdivided into multiple attribute-specific channels. When the brain rebinds sensory content information (‘what’) across different channels, temporal coincidence (‘when’) along with spatial coincidence (‘where’) provides a critical clue. It however remains unknown whether neural mechanisms for binding synchronous attributes are specific to each attribute combination, or universal and central. In human psychophysical experiments, we examined how combinations of visual, auditory and tactile attributes affect the temporal frequency limit of synchrony-based binding. The results indicated that the upper limits of cross-attribute binding were lower than those of within-attribute binding, and surprisingly similar for any combination of visual, auditory and tactile attributes (2–3 Hz). They are unlikely to be the limits for judging synchrony, since the temporal limit of a cross-attribute synchrony judgement was higher and varied with the modality combination (4–9 Hz). These findings suggest that cross-attribute temporal binding is mediated by a slow central process that combines separately processed ‘what’ and ‘when’ properties of a single event. While the synchrony performance reflects temporal bottlenecks existing in ‘when’ processing, the binding performance reflects the central temporal limit of integrating ‘when’ and ‘what’ properties.


PLOS ONE | 2011

Audio-Visual Speech Timing Sensitivity Is Enhanced in Cluttered Conditions

Warrick Roseboom; Shin'ya Nishida; Waka Fujisaki; Derek H. Arnold

Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.


Frontiers in Psychology | 2012

Effects of Delayed Visual Feedback on Grooved Pegboard Test Performance

Waka Fujisaki

Using four experiments, this study investigates what amount of delay brings about maximal impairment under delayed visual feedback and whether a critical interval, such as that found in audition, also exists in vision. The first experiment measured Grooved Pegboard test performance as a function of visual feedback delays from 120 to 2120 ms in 16 steps. Performance decreased sharply until about 490 ms and then more gradually until 2120 ms, suggesting that two mechanisms operate under delayed visual feedback. Since delayed visual feedback, unlike delayed auditory feedback, induces not only temporal but also spatial displacement between motor output and sensory feedback, one of these mechanisms may be responsible for the spatial displacement. The second experiment therefore provided simultaneous haptic feedback together with delayed visual feedback to convey the correct spatial position. The disruption was significantly ameliorated when information about spatial position was provided by the haptic source: a sharp decrease in performance up to approximately 300 ms was followed by almost flat performance, similar to the critical interval found in audition. Accordingly, the mechanism that caused the sharp decrease in performance in experiments 1 and 2 was probably driven mainly by temporal disparity and is common across different modality–motor combinations, while the mechanism that caused the more gradual decrease in experiment 1 was driven mainly by spatial displacement. In experiments 3 and 4, the reliability of spatial information from the haptic source was reduced by having participants wear a glove or use a tool. When the reliability of spatial information was reduced, the data lay between those of experiments 1 and 2, and the gradual decrease in performance partially reappeared. These results further support the notion that two mechanisms operate under delayed visual feedback.
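The two-mechanism account rests on the shape of the delay-performance curve: a steep decline up to roughly 490 ms followed by a shallower one. Below is a minimal sketch of how such a curve could be summarized with a two-segment piecewise-linear ("broken-stick") fit; this is not the study's actual analysis, and the data, slopes, and breakpoint values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative only: a "broken-stick" model separates a steep early cost of
# feedback delay from a shallower later one, consistent with two mechanisms.
def broken_stick(delay, t0, slope1, slope2, breakpoint):
    """Completion time (arbitrary units) as a piecewise-linear function of delay (ms)."""
    return np.where(delay <= breakpoint,
                    t0 + slope1 * delay,
                    t0 + slope1 * breakpoint + slope2 * (delay - breakpoint))

delays = np.linspace(120, 2120, 16)                      # 16 delay steps, as in experiment 1
times = broken_stick(delays, 40, 0.12, 0.02, 490) \
        + np.random.default_rng(0).normal(0, 2, delays.size)   # fabricated data
params, _ = curve_fit(broken_stick, delays, times, p0=[40, 0.1, 0.02, 500])
print(params)   # the fitted breakpoint estimates where the sharp decline ends
```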


Journal of Vision | 2014

Audiovisual integration in the human perception of materials.

Waka Fujisaki; Naokazu Goda; Isamu Motoyoshi; Hidehiko Komatsu; Shin'ya Nishida

Interest in the perception of the material of objects has been growing. While material perception is a critical ability for animals to properly regulate behavioral interactions with surrounding objects (e.g., eating), little is known about its underlying processing. Vision and audition provide useful information for material perception; using only its visual appearance or impact sound, we can infer what an object is made from. However, what material is perceived when the visual appearance of one material is combined with the impact sound of another, and what rules govern the cross-modal integration of material information? We addressed these questions by asking 16 human participants to rate how likely it was that audiovisual stimuli (48 combinations of the visual appearances of six materials and the impact sounds of eight materials), along with visual-only and auditory-only stimuli, fell into each of 13 material categories. The results indicated strong interactions between audiovisual material perceptions; for example, the appearance of glass paired with a pepper sound is perceived as transparent plastic. Ratings of material-category likelihood follow a multiplicative integration rule, in that the categories judged to be likely are consistent with both the visual and the auditory stimuli. On the other hand, ratings of material properties, such as roughness and hardness, follow a weighted-average rule. Despite the difference in their integration calculations, both rules can be interpreted as optimal Bayesian integration of independent audiovisual estimates for the two types of material judgment.
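A minimal sketch of the two integration rules described above, under simple likelihood and Gaussian assumptions; all numbers are invented for illustration and are not taken from the study.

```python
import numpy as np

# Multiplicative rule for material-category likelihoods: the audiovisual
# likelihood of each category is proportional to the product of the unimodal
# likelihoods, then renormalized (hypothetical categories and values).
p_visual   = np.array([0.60, 0.30, 0.10])   # e.g., glass, plastic, ceramic
p_auditory = np.array([0.10, 0.60, 0.30])
p_av = p_visual * p_auditory
p_av /= p_av.sum()                          # categories consistent with BOTH cues dominate

# Weighted-average rule for continuous material properties (e.g., hardness):
# a reliability-weighted mean of the unimodal estimates, the standard form of
# optimal Bayesian cue combination for independent Gaussian estimates.
s_v, var_v = 7.0, 1.0                       # visual estimate and its variance (hypothetical)
s_a, var_a = 4.0, 4.0                       # auditory estimate and its variance
w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
s_av = w_v * s_v + (1 - w_v) * s_a

print(p_av, s_av)
```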


international conference on digital human modeling and applications in health, safety, ergonomics and risk management | 2016

Improving the Palatability of Nursing Care Food Using a Pseudo-chewing Sound Generated by an EMG Signal

Hiroshi Endo; Shuichi Ino; Waka Fujisaki

Elderly individuals whose eating functions have declined can eat only foods with very soft, often unpleasant textures. If more varied food textures could be delivered, the pleasure derived from eating could be improved. We tried to influence the perception of food texture using a pseudo-chewing sound synchronized with mastication via the electromyogram (EMG) of the masseter. Coincidentally, when the EMG is heard as a sound, it resembles the "crunchy" sound emitted by root vegetables. We investigated whether the perceived texture of nursing care food would change in participants exposed to the EMG chewing sound. Elderly participants evaluated the textures of nursing care foods. When the EMG chewing sound was provided, they were more likely to evaluate a food as chewy. In addition, several scores related to the pleasure of eating also increased. These results demonstrate the possibility of improving the palatability of texture-modified diets.
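As a rough illustration of playing the masseter EMG back as a chewing-like sound, the sketch below band-pass filters a simulated EMG burst and rescales it to the audio range; the filter settings, sampling rate, and signal are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Illustrative sketch: convert a masseter-EMG signal into an audio waveform so
# that the "chewing sound" tracks mastication.
def emg_to_pseudo_chewing_sound(emg, fs_emg=2000, band=(20.0, 450.0)):
    """Band-pass the raw EMG and normalize it to the audio range [-1, 1].
    fs_emg and the pass band are assumed, illustrative values."""
    sos = butter(4, band, btype='bandpass', fs=fs_emg, output='sos')
    filtered = sosfiltfilt(sos, emg)
    return filtered / (np.abs(filtered).max() + 1e-12)

# Fake EMG burst standing in for one chew (real input would come from electrodes).
fs = 2000
t = np.arange(fs) / fs
emg = np.random.default_rng(1).normal(0, 1, fs) * np.exp(-((t - 0.5) ** 2) / 0.01)
audio = emg_to_pseudo_chewing_sound(emg, fs_emg=fs)
```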


Appetite | 2017

Texture-dependent effects of pseudo-chewing sound on perceived food texture and evoked feelings in response to nursing care foods

Hiroshi Endo; Shuichi Ino; Waka Fujisaki

Because chewing sounds influence perceived food textures, unpleasant textures of texture-modified diets might be improved by chewing sound modulation. Additionally, since inhomogeneous food properties increase perceived sensory intensity, the effects of chewing sound modulation might depend on inhomogeneity. This study examined the influences of texture inhomogeneity on the effects of chewing sound modulation. Three kinds of nursing care foods in two food process types (minced-/puréed-like foods for inhomogeneous/homogeneous texture respectively) were used as sample foods. A pseudo-chewing sound presentation system, using electromyogram signals, was used to modulate chewing sounds. Thirty healthy elderly participants participated in the experiment. In two conditions with and without the pseudo-chewing sound, participants rated the taste, texture, and evoked feelings in response to sample foods. The results showed that inhomogeneity strongly influenced the perception of food texture. Regarding the effects of the pseudo-chewing sound, taste was less influenced, the perceived food texture tended to change in the minced-like foods, and evoked feelings changed in both food process types. Though there were some food-dependent differences in the effects of the pseudo-chewing sound, the presentation of the pseudo-chewing sounds was more effective in foods with an inhomogeneous texture. In addition, it was shown that the pseudo-chewing sound might have positively influenced feelings.


Perception | 2016

Cross-Modal Correspondence Among Vision, Audition, and Touch in Natural Objects: An Investigation of the Perceptual Properties of Wood

Shoko Kanaya; Kenji Kariya; Waka Fujisaki

Certain systematic relationships are often assumed between information conveyed by multiple sensory modalities; for instance, a small figure and a high pitch may be perceived as more harmonious. This phenomenon, termed cross-modal correspondence, may result from correlations between multi-sensory signals learned through daily experience of the natural environment. If so, cross-modal correspondences should be observed not only in the perception of artificial stimuli but also in the perception of natural objects. To test this hypothesis, we reanalyzed data collected previously in our laboratory examining perception of the material properties of wood using vision, audition, and touch. We compared participants' evaluations of three perceptual properties (surface brightness, sharpness of sound, and smoothness) of wood blocks obtained separately via vision, audition, and touch. Significant positive correlations were identified for all properties in the audition–touch comparison, and for two of the three properties in the vision–touch comparison. By contrast, no property exhibited a significant positive correlation in the vision–audition comparison. These results suggest that we learn correlations between multi-sensory signals through experience; however, the strength of this statistical learning apparently depends on the particular combination of sensory modalities involved.
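A minimal sketch of the kind of reanalysis described above: correlating per-block property ratings obtained separately through vision, audition, and touch. The rating data here are randomly generated placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical ratings of one property (e.g., surface brightness) for 20 wood
# blocks, evaluated separately by vision, audition, and touch (placeholder data).
rng = np.random.default_rng(0)
latent = rng.normal(size=20)                     # a shared "true" block property
vision   = latent + rng.normal(0, 0.5, 20)
audition = latent + rng.normal(0, 1.0, 20)
touch    = latent + rng.normal(0, 0.5, 20)

for name, (a, b) in {"vision-audition": (vision, audition),
                     "vision-touch":    (vision, touch),
                     "audition-touch":  (audition, touch)}.items():
    r, p = pearsonr(a, b)                        # cross-modal correspondence as correlation
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```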


Scientific Reports | 2018

Crossmodal association of auditory and visual material properties in infants

Yuta Ujiie; Wakayo Yamashita; Waka Fujisaki; So Kanazawa; Masami K. Yamaguchi

The human perceptual system enables us to extract visual properties of an object’s material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, using near-infrared spectroscopy (NIRS), we demonstrated for the first time a mapping between auditory and visual material properties (“Metal” and “Wood”) in the right temporal region of preverbal 4- to 8-month-old infants. Furthermore, we found that infants acquired the audio-visual mapping for the “Metal” material later than for the “Wood” material, since infants form the visual representation of the “Metal” material only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that a material’s familiarity might facilitate the development of multisensory processing during the first year of life.


Perception | 2015

Effects of Frequency Separation and Diotic/Dichotic Presentations on the Alternation Frequency Limits in Audition Derived from a Temporal Phase Discrimination Task.

Shoko Kanaya; Waka Fujisaki; Shin'ya Nishida; Shigeto Furukawa; Kazuhiko Yokosawa

Temporal phase discrimination is a useful psychophysical task for evaluating how sensory signals, synchronously detected in parallel, are perceptually bound by human observers. In this task, two stimulus sequences synchronously alternate between two states (say, A-B-A-B and X-Y-X-Y) in either of two temporal phases (i.e., A and B are paired with X and Y, respectively, or vice versa). The critical alternation frequency beyond which participants cannot discriminate the temporal phase is measured as an index characterizing the temporal property of the underlying binding process. This task has been used to reveal the mechanisms underlying visual and cross-modal binding. To directly compare these binding mechanisms with those in another modality, this study used the temporal phase discrimination task to reveal the processes underlying auditory binding. The two sequences were alternations between two pitches. We manipulated the distance between the two sequences by changing the intersequence frequency separation or the ears of presentation (diotic vs. dichotic). Results showed that the alternation frequency limit ranged from 7 to 30 Hz, becoming higher as the intersequence distance decreased, as is the case in vision. However, unlike in vision, auditory phase discrimination limits were higher and more variable across participants.
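A minimal sketch of how stimuli for such a temporal phase discrimination task might be generated: two pitch sequences alternating between two states, paired either in phase or in anti-phase. The tone frequencies, alternation rate, duration, and sample rate below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def phase_discrimination_sequences(alt_hz=8.0, in_phase=True,
                                   dur=2.0, sr=44100,
                                   pitches_1=(400.0, 500.0),
                                   pitches_2=(2000.0, 2500.0)):
    """Two pitch sequences that alternate between two states (A-B-A-B and
    X-Y-X-Y), paired either in phase (A with X) or in anti-phase (A with Y).
    All parameter values are illustrative, not those used in the study."""
    t = np.arange(int(dur * sr)) / sr
    # State index (0 or 1) switching at twice the alternation frequency,
    # so one full A-B cycle occurs at alt_hz.
    state = (np.floor(t * alt_hz * 2) % 2).astype(int)
    seq1 = np.sin(2 * np.pi * np.where(state == 0, pitches_1[0], pitches_1[1]) * t)
    s2 = state if in_phase else 1 - state      # flip the pairing for the other phase
    seq2 = np.sin(2 * np.pi * np.where(s2 == 0, pitches_2[0], pitches_2[1]) * t)
    return seq1, seq2

seq1, seq2 = phase_discrimination_sequences(alt_hz=8.0, in_phase=False)
```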


Experimental Brain Research | 2009

Audio–tactile superiority over visuo–tactile and audio–visual combinations in the temporal resolution of synchrony perception

Waka Fujisaki; Shin'ya Nishida

Collaboration


Dive into Waka Fujisaki's collaborations.

Top Co-Authors


Shin'ya Nishida

Nippon Telegraph and Telephone

Hiroshi Endo

National Institute of Advanced Industrial Science and Technology

Shuichi Ino

National Institute of Advanced Industrial Science and Technology

So Kanazawa

Japan Women's University

Hidehiko Komatsu

Graduate University for Advanced Studies
