Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shouichi Takane is active.

Publication


Featured research published by Shouichi Takane.


Applied Acoustics | 2001

Control of auditory distance perception based on the auditory parallax model

Hae‐Young Kim; Yôiti Suzuki; Shouichi Takane; Toshio Sone

Simulation of the distance of a sound image within 2 m of a listener, in the absence of reflections and a loudness cue, was investigated. To do this, a model named the “auditory parallax model”, which focuses on the role of the parallax-angle information contained in head-related transfer functions (HRTFs), was examined with psychoacoustical experiments. For comparison, experiments were also done with an actual sound source, with sound synthesized from digitally measured HRTFs, and with sound in which the interaural time difference (ITD) and interaural level difference (ILD) were synthesized by the Hirsch–Tahara model. The perceived distance of a sound image produced by the actual sound source increased monotonically with the physical distance of the source up to 1–1.5 m, even without cues of sound pressure level or reflections from walls. The perceived distances of sound images simulated with the auditory parallax model and with synthesized HRTFs showed tendencies very similar to those with the actual sound source. For a sound image produced by the Hirsch–Tahara model, on the other hand, the perceived distance increased only up to around 40 cm and then saturated. These results show that simple synthesis of the ITD and ILD as functions of distance is insufficient to explain auditory distance perception. However, the results of the experiment in which HRTFs were simulated based on the auditory parallax model showed that the cues provided by the model were almost sufficient to control the perception of auditory distance for an actual sound source located within about 2 m. A possible reason for the good performance of the auditory parallax model is the resemblance between the relative frequency characteristics (spectral shape) and the ILD as a function of frequency simulated by the model and those of the actual HRTFs.
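
The model's key quantity, the parallax angle, is the difference between the source directions seen from the two ears. A minimal numeric sketch of that geometry, assuming a spherical head of radius 9 cm with ears on the interaural axis (both values illustrative, not taken from the paper):

```python
import numpy as np

# Parallax angle: difference between the directions from each ear to the
# source. Illustrative geometry: ears at (+/-a, 0) on the x axis, source
# at distance r (m) in direction theta (rad, 0 = straight ahead) in the
# horizontal plane.
def parallax_angle(r, theta, a=0.09):
    src = np.array([r * np.sin(theta), r * np.cos(theta)])  # x right, y front
    ang_left = np.arctan2(src[0] + a, src[1])   # direction seen from left ear
    ang_right = np.arctan2(src[0] - a, src[1])  # direction seen from right ear
    return ang_left - ang_right

for r in (0.3, 0.5, 1.0, 2.0):
    print(f"r = {r:.1f} m -> parallax = {np.degrees(parallax_angle(r, 0.0)):.1f} deg")
```

For a frontal source this reduces to 2 arctan(a/r), which changes steeply below about 1 m and flattens beyond it, consistent with the roughly 2 m range over which the model controls perceived distance.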


Journal of the Acoustical Society of America | 1998

A modeling of distance perception based on an auditory parallax model

Yôiti Suzuki; Shouichi Takane; Hae‐Young Kim; Toshio Sone

Distance perception of sound from a source (<5 m) close to a listener was studied under conditions without intensity or reflection cues, using a small movable loudspeaker in an anechoic room. Distance perception under such conditions is modeled using the difference in the HRTF (head-related transfer function) due to the parallax angle, calculated from the difference between the directions from each ear to the source. This model is called the ‘‘auditory parallax model’’ here. Sound signals realizing this model were digitally synthesized for the listening experiment. In addition, an experiment with an actual sound source and one with HRTFs synthesized as precisely as possible were also conducted. The distances obtained for sound with the auditory parallax model and with synthesized HRTFs showed tendencies very similar to those with actual sound sources. That is, in all three conditions, the perceived distance of a sound image monotonically increased with the physical distance of source f...


Journal of the Acoustical Society of America | 2013

Development and performance evaluation of virtual auditory display system to synthesize sound from multiple sound sources using graphics processing unit

Kanji Watanabe; Yusuke Oikawa; Sojun Sato; Shouichi Takane; Koji Abe

The head-related transfer function (HRTF) characterizes the sound transmission from a sound source to the listener's eardrum. When a listener hears a sound filtered with HRTFs, the listener can localize a virtual target (sound image) as if the sound had come from the position at which the HRTFs were measured. A moving sound image can be generated by switching between HRTFs of successive directions in real time. While many virtual auditory displays (VADs) based on the synthesis of HRTFs have been proposed, most of them can synthesize only a few sound images due to a lack of computational power. In this article, a VAD system implemented on a graphics processing unit (GPU) is introduced. In our system, the convolution of HRTFs is parallelized on the GPU to realize high-speed processing. In addition, multiple HRTFs, each corresponding to a sound source at a different position, are processed in parallel to control multiple sound images simultaneously. In this article, the performance of ou...
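
The per-source processing described here is, at its core, a bank of independent convolutions whose outputs are summed per ear, which is why it parallelizes well. A minimal CPU sketch of that structure, with illustrative names (sigs, hrirs) not taken from the paper; on a GPU the same per-source loop is what would be parallelized, for instance via CuPy's NumPy-compatible API:

```python
import numpy as np
from scipy.signal import fftconvolve

# sigs: list of dry source signals (assumed equal length);
# hrirs: list of (left, right) head-related impulse response pairs,
# one pair per source, for that source's current direction.
def render_binaural(sigs, hrirs):
    # One convolution per (source, ear); outputs summed per ear.
    left = sum(fftconvolve(sig, hl) for sig, (hl, hr) in zip(sigs, hrirs))
    right = sum(fftconvolve(sig, hr) for sig, (hl, hr) in zip(sigs, hrirs))
    return np.stack([left, right])
```

Real-time operation with moving sources would additionally require block-wise processing and crossfading between successive HRTFs to avoid audible switching artifacts.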


Journal of the Acoustical Society of America | 2006

Objective and subjective evaluation of numerically estimated head‐related transfer functions

Shouichi Takane; Koji Abe

In this study, head-related transfer functions (HRTFs) of a dummy head and of human subjects are numerically estimated using the boundary element method (BEM), and their suitability for auralization is discussed from a subjective viewpoint. The geometries of the heads are obtained with a 3-D laser scanner. The effect of the torso on the estimation of the HRTFs is also examined and discussed via the 3-D measurement. The estimated HRTFs are compared with the measured HRTFs for each subject using various objective criteria, such as spectral distortion (SD), signal-to-distortion ratio (SDR), and interaural time/level differences (ITD/ILD). The suitability of the estimated HRTFs for the synthesis of 3-D sound images is then evaluated via hearing experiments. In our previous study [Proceedings of WESPAC IX (2006)], the results of the experiment were insufficient, since the low-frequency range of the estimated HRTFs caused ambiguity in sound localization. In this study, the frequency range of the estimation is e...
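
Of the objective criteria listed, spectral distortion is the simplest to state. A sketch using one common definition (the RMS of the dB magnitude ratio across the frequency bins of interest); whether the paper uses exactly this form is an assumption:

```python
import numpy as np

# Spectral distortion (dB) between a measured and an estimated HRTF,
# given as complex spectra sampled on the same frequency grid.
def spectral_distortion(H_meas, H_est):
    log_ratio = 20.0 * np.log10(np.abs(H_meas) / np.abs(H_est))
    return np.sqrt(np.mean(log_ratio ** 2))
```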


Journal of the Acoustical Society of America | 2005

Adjustment of interaural time difference in head related transfer functions based on listeners’ anthropometry and its effect on sound localization

Yôiti Suzuki; Kanji Watanabe; Yukio Iwaya; Jiro Gyoba; Shouichi Takane

Because the transfer functions governing subjective sound localization (HRTFs) show strong individuality, sound localization systems based on the synthesis of HRTFs require suitable HRTFs for each individual listener. However, it is impractical to obtain HRTFs for all listeners by measurement. Improving sound localization by adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry might be a practical alternative. This study first developed a new method to estimate interaural time differences (ITDs) from HRTFs. Then, correlations between ITDs and anthropometric parameters were analyzed using the canonical correlation method. The results indicated that parameters relating to head size and to shoulder and ear positions are significant. Consequently, an attempt was made to express ITDs based on a listener's anthropometric data. In this process, the change of ITD as a function of azimuth angle was parameterized as a sum of sine functions. Then the parameters were analyzed using multipl...
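
The paper develops its own ITD estimation method, which the abstract does not detail; purely as an illustrative baseline, a standard cross-correlation estimate over a pair of head-related impulse responses might look like this:

```python
import numpy as np

# Estimate the ITD (seconds) from left/right head-related impulse
# responses sampled at fs Hz, as the lag maximizing the cross-correlation.
def estimate_itd(hrir_left, hrir_right, fs):
    corr = np.correlate(hrir_left, hrir_right, mode="full")
    lag = np.argmax(corr) - (len(hrir_right) - 1)  # lag in samples; sign gives side
    return lag / fs
```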


Journal of the Acoustical Society of America | 2016

Effect of individualization of interaural time/level differences with non-individualized head-related transfer functions on sound localization

Kanji Watanabe; Masayuki Nishiguchi; Shouichi Takane; Koji Abe

Head-related transfer functions (HRTFs) are known to contain comprehensive auditory cues for sound source position and show large inter-subject variation. Therefore, the individualization of HRTFs is important for highly accurate sound localization systems such as virtual auditory displays. In the present study, we assume that the interaural time difference (ITD) and the interaural level difference (ILD) can be estimated from the listener's anthropometric parameters. The individualization of the HRTFs is achieved by replacing the ITDs and ILDs of the non-individualized HRTFs with the listener's own. In this report, the effect of the individualization was evaluated by listening experiments. The non-individual HRTFs were obtained by interpolating between the magnitude responses of the listener's own HRTFs and flat ones, with the interpolation ratio corresponding to the degree of individuality. The ITDs and ILDs were then added to individualize the HRTFs. From the results, the effect of the degree of individuality was i...
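
The replacement idea can be sketched crudely: impose the listener's ITD as a delay and the ILD as a gain on a non-individualized HRIR pair. The paper replaces ITDs and ILDs extracted from the HRTFs themselves; the integer-sample delay and broadband gain below, and the sign conventions, are simplifying assumptions made for illustration:

```python
import numpy as np

# hl, hr: non-individualized left/right HRIRs; itd_s in seconds, ild_db in dB.
# Convention (assumed): positive itd_s / ild_db mean the left ear leads / is louder.
def individualize(hl, hr, itd_s, ild_db, fs):
    n = len(hl)
    pad = np.zeros(int(round(abs(itd_s) * fs)))
    if itd_s > 0:   # left ear leads: delay the right HRIR (tail truncated)
        hr = np.concatenate([pad, hr])[:n]
    else:           # right ear leads: delay the left HRIR
        hl = np.concatenate([pad, hl])[:n]
    g = 10.0 ** (ild_db / 40.0)   # split the target ILD symmetrically
    # Note: this adds the target ILD on top of the base pair; true
    # replacement would first equalize the pair's original ITD/ILD.
    return hl * g, hr / g
```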


Journal of the Acoustical Society of America | 2016

Influence of sound source arrangement on the precedence effect

Koji Abe; Shouichi Takane; Masayuki Nishiguchi; Kanji Watanabe

In applications such as robot audition, estimating the direction of a sound image from the physical sound signal is highly necessary. To develop a system that estimates the direction of a sound image produced by multiple sound sources, it is necessary to know the behavior of the precedence effect over a comprehensive range of sound source arrangements. We therefore investigated the influence of sound source arrangement on the precedence effect experimentally. The minimum angle between the directions of the two sound sources was 30 degrees. The experimental method was a three-alternative forced-choice task, the three choices being the directions of the two sound sources and the direction midway between them. For bilaterally symmetric arrangements of the sound sources, our experimental results were in good agreement with the findings of previous studies. However, our results show that the deviation of the localized direction of the sound image depends on the source arrangement. Particularly, in the case of l...


Journal of the Acoustical Society of America | 2016

Some investigations on properties of spatial principal components analysis of individual head-related transfer functions

Shouichi Takane

The compact representation of the spatial variation of head-related transfer functions (HRTFs) based on principal components analysis (PCA), called spatial PCA (SPCA), is investigated in this report. Although the SPCA of HRTFs has been researched for about 30 years, some questions remain open. In this report, the author tries to answer the following two: (1) how much data (number of subjects and/or directions) is enough to generate the principal components, and (2) which domain (impulse response, frequency spectrum, etc.) gives the most compact representation of individual HRTFs under SPCA. As for (1), a preliminary investigation suggests that a relatively small amount of data may suffice, meaning that the HRTFs of a given subject can be reconstructed via SPCA from the HRTFs of a relatively small number of other subjects. As for (2), the author concluded that the complex frequency spectrum brings about the most compact representation as the result...
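
As a point of reference for what SPCA computes, a minimal sketch via the SVD; using complex spectra as columns follows the abstract's conclusion on question (2), but the exact data layout in the paper is an assumption:

```python
import numpy as np

# H: matrix of HRTFs, one row per observation (e.g., subject x direction
# x ear), columns the chosen domain (here, complex frequency-spectrum bins).
def spca(H, n_components):
    mean = H.mean(axis=0)
    U, s, Vt = np.linalg.svd(H - mean, full_matrices=False)
    pcs = Vt[:n_components]                 # spatial principal components
    weights = (H - mean) @ pcs.conj().T     # per-observation weights
    H_hat = weights @ pcs + mean            # rank-limited reconstruction
    return pcs, weights, H_hat
```

Reconstructing a new subject's HRTFs then amounts to estimating only the low-dimensional weights against components learned from the other subjects, which is why a small pool of subjects can suffice.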


Journal of the Acoustical Society of America | 2013

Comparison of precedence effect behavior in anechoic chamber with that in ordinary room

Koji Abe; Shouichi Takane; Sojun Sato; Kanji Watanabe

The precedence effect is well known as one of the auditory illusions that occur when multiple sound sources emit similar signals. When a sound is followed by a similar sound after a relatively short time delay, a single fused sound image is localized at the source position corresponding to the first-arriving sound. This feature is applicable to public address systems, which make the audience perceive a sound image at a position different from the actual loudspeaker positions while achieving some sound reinforcement. Despite the many studies of this phenomenon, the behavior of the precedence effect has been investigated only for limited sound source arrangements in laboratory environments such as anechoic chambers. Its behavior in ordinary rooms, on the other hand, is not obvious, and clarifying how the precedence effect in an anechoic chamber differs from that in an ordinary room is useful for applying the effect to public address systems. In thi...
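
The lead-lag stimulus underlying such experiments is easy to state concretely. A sketch generating one lead-lag pair; the burst type, duration, and the 2 ms delay are illustrative values, not the conditions of this study:

```python
import numpy as np

# Same burst from two loudspeakers, the lag copy delayed by a few ms.
fs = 48000
burst = np.random.randn(int(0.005 * fs))    # 5 ms noise burst
delay = int(0.002 * fs)                     # 2 ms lead-lag delay
lead = np.concatenate([burst, np.zeros(delay)])
lag = np.concatenate([np.zeros(delay), burst])
# 'lead' drives the first loudspeaker and 'lag' the second; listeners
# typically localize the fused image near the leading loudspeaker.
```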


I-perception | 2011

A Fundamental Study on Influence of Concurrently Presented Visual Stimulus Upon Loudness Perception

Koji Abe; Shota Tsujimura; Shouichi Takane; Kanji Watanabe; Sojun Sato

As a basic study on the influence of the dynamic properties of audio-visual stimuli upon the interaction between audition and vision, the effect of a simple movement in the visual stimulus on the loudness perception of the audio stimulus was investigated via a psychophysical experiment. In this experiment, the visual stimulus given to subjects along with the audio stimulus was a bar on a display, one side of which flexibly expanded and contracted. The loudness of the audio stimulus presented with this visual effect was rated as an absolute numerical value using the magnitude estimation method. The reference bar length was determined so as to correspond to the Zwicker loudness calculated for the given audio stimulus. As a result, the visual stimulus did not affect loudness perception when the bar was presented at the reference length. On the other hand, the loudness rating for the same audio stimulus significantly increased when the bar was longer than the reference. This indicates that a change in the correspondence between the audio and visual stimuli affects loudness perception.

Collaboration


Dive into Shouichi Takane's collaboration.

Top Co-Authors

Kanji Watanabe, Akita Prefectural University
Koji Abe, Akita Prefectural University
Sojun Sato, Akita Prefectural University
Ryosuke Kodama, Akita Prefectural University
Toshio Harima, Tohoku Institute of Technology