Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kenji Ozawa is active.

Publication


Featured research published by Kenji Ozawa.


Journal of the Acoustical Society of America | 2007

Estimation of interaural level difference based on anthropometry and its effect on sound localization

Kanji Watanabe; Kenji Ozawa; Yukio Iwaya; Yôiti Suzuki; Kenji Aso

Individualization of head-related transfer functions (HRTFs) is important for highly accurate sound localization systems such as virtual auditory displays. To avoid the burden of directly measuring HRTFs, this paper presents a method to estimate interaural level differences (ILDs) from a listener's anthropometry. The main result is that localization with nonindividualized HRTFs is improved if the ILD is fitted to the listener. First, the relationship between ILDs and the anthropometric parameters was analyzed using multiple regression analysis. The azimuthal variation of the ILDs in each 1/3-octave band was then estimated from the listener's anthropometric parameters. A psychoacoustical experiment was carried out to evaluate the method's effectiveness. The results show that adjusting the frequency characteristics of the ILDs for a listener with the proposed method improves localization accuracy.
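As a rough illustration of this approach, the Python sketch below fits a multiple-regression model from anthropometric parameters to the ILD in each 1/3-octave band and azimuth, then predicts the ILD pattern for an unseen listener. All data, dimensions, and variable names are synthetic placeholders, not the paper's.

```python
# Minimal sketch (synthetic data): multiple regression from anthropometric
# parameters to the ILD in each 1/3-octave band and azimuth.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n_listeners = 30     # hypothetical set of listeners with measured HRTFs
n_params = 5         # e.g. head width, head depth, pinna size, ... (assumed)
n_bands = 20         # 1/3-octave bands
n_azimuths = 72      # 5-degree azimuth steps

# Placeholder training data: anthropometry X and measured ILDs in dB
X = rng.normal(size=(n_listeners, n_params))
ilds = rng.normal(size=(n_listeners, n_bands, n_azimuths))

# One multiple regression per (band, azimuth) cell, fitted jointly as a
# multi-output linear model
model = LinearRegression().fit(X, ilds.reshape(n_listeners, -1))

# Estimate the azimuthal ILD variation per band for a new listener
x_new = rng.normal(size=(1, n_params))
ild_est = model.predict(x_new).reshape(n_bands, n_azimuths)
print(ild_est.shape)   # (20, 72): estimated ILD (dB) per band and azimuth
```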


Systems, Man and Cybernetics | 2011

Development of Kansei estimation models for the sense of presence in audio-visual content

Yuichiro Kinoshita; Kazutomo Fukue; Kenji Ozawa

The term ‘presence’ is widely used to express the performance of audio-visual content and systems. Many advanced audio-visual reproduction systems have been proposed to enable the perception of a higher degree of presence. Methodologies for evaluating the sense of presence derived from these systems also play an important role in the further evolution of audio-visual content and systems. This paper describes the construction of Kansei models that quantitatively evaluate the degree of presence in audio-visual content. Subjective evaluation experiments were conducted with 40 different types of audio-visual content to investigate the relationship between the features of the content and the perceived sense of presence. On the basis of the evaluation scores and the extracted audio-visual features, models were constructed separately for audio, visual and audio-visual content using neural networks. Performance tests of the constructed models demonstrated sufficient accuracy in estimating the sense of presence and statistically significant correlations between the model outputs and the subjective evaluation scores.
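A minimal sketch of such a Kansei estimation model, assuming a generic feed-forward regressor and entirely synthetic features and scores (the paper's actual feature set and network design are not reproduced here):

```python
# Minimal sketch (synthetic data): a small neural-network regressor mapping
# extracted audio-visual features of a content item to a presence score.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

n_items = 40       # 40 content items, as in the experiment
n_features = 12    # hypothetical number of extracted audio-visual features

features = rng.normal(size=(n_items, n_features))   # placeholder features
presence = rng.uniform(1.0, 7.0, size=n_items)      # placeholder subjective scores

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(features, presence)

# Estimate the sense of presence for an unseen content item
print(model.predict(rng.normal(size=(1, n_features))))
```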


International Symposium on Universal Communication | 2008

Content Presence vs. System Presence in Audio Reproduction Systems

Kenji Ozawa; Yoshihiro Chujo

The auditory presence of a reproduced sound depends on its content and on the characteristics of the reproduction system. In this study, the former property is referred to as "content presence", while the latter is called "system presence". A psychoacoustical experiment was conducted to measure the presence of twenty-five stimuli, consisting of five reproduction systems combined with five sounds. The five systems, which included a binaural reproduction system and a monaural system, differed in their accuracy of sound localization. The five sounds were chosen because their content presence differed in our previous experiment. The experiment used Scheffé's method of paired comparison. The results showed that accurate sound localization is important for high presence. Moreover, the system presence was found to be comparable to the content presence in audio reproduction systems.
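For illustration, a simplified computation of Scheffé-style scale values from paired-comparison ratings is sketched below; the rating data, scale range, and number of judges are hypothetical.

```python
# Minimal sketch (simplified): scale values from Scheffé-style paired
# comparisons. ratings[k, i, j] is judge k's rating of how much stimulus i
# exceeds stimulus j in presence, on a -3..+3 scale (hypothetical data).
import numpy as np

rng = np.random.default_rng(2)
n_judges, n_stimuli = 10, 25          # 25 stimuli = 5 systems x 5 sounds

true_scale = rng.normal(size=n_stimuli)
ratings = np.round(np.clip(
    true_scale[None, :, None] - true_scale[None, None, :]
    + rng.normal(scale=0.5, size=(n_judges, n_stimuli, n_stimuli)), -3, 3))

# Scale value of stimulus i: average signed preference over all judges and
# all comparison partners (the diagonal cancels out).
pref = ratings - ratings.transpose(0, 2, 1)
scale = pref.sum(axis=(0, 2)) / (2 * n_judges * n_stimuli)
print(np.round(scale, 2))
```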


Journal of the Acoustical Society of America | 1993

Monaural phase effects on timbre of two‐tone signals

Kenji Ozawa; Yôiti Suzuki; Toshio Sone

The effects on timbre of the phase difference between the components of a two‐tone signal with a frequency ratio of f2/f1 = 2.0 were investigated. In the experiments, the similarities among six signals were judged with the complete method of triads. The subjective space in which the configuration of points for the different phase conditions lay was derived from the data by means of Torgerson's multidimensional scaling program. An earlier study had reported that the subjective space was a circle. The present results, however, revealed that the subjective space becomes one‐dimensional when either the amplitude ratio between the components is large or the frequency of the signal is high. This one‐dimensional configuration is interpreted in terms of the partially masked loudness of the higher-frequency component of the signal.
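The core of Torgerson's (classical) multidimensional scaling can be sketched as follows; the six-signal dissimilarity matrix here is synthetic, and the eigenvalue spectrum indicates whether one or two dimensions suffice.

```python
# Minimal sketch: classical (Torgerson) multidimensional scaling of a
# six-signal dissimilarity matrix. The matrix here is synthetic.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 2-D configuration of six signals and its distance matrix
points = rng.normal(size=(6, 2))
D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

# Torgerson scaling: double-center the squared distances, then eigendecompose
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigval, eigvec = np.linalg.eigh(B)

# The size of the leading eigenvalues indicates how many dimensions the
# subjective space needs (one vs. two, as discussed in the abstract).
order = np.argsort(eigval)[::-1]
coords = eigvec[:, order[:2]] * np.sqrt(np.maximum(eigval[order[:2]], 0))
print(np.round(eigval[order][:3], 3))   # eigenvalue spectrum
print(np.round(coords, 3))              # recovered configuration
```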


IEEE Global Conference on Consumer Electronics | 2014

Neural network-based microphone array learning of temporal-spatial patterns of input signals

Akihiro Iseki; Kenji Ozawa; Yuichiro Kinoshita

A sharp directional microphone array system was previously developed using a neural network. However, that system cannot distinguish two signals with different frequencies because it learns only the spatial pattern of the sound pressure distribution of the input signals. To overcome this problem, we propose a system that learns the temporal-spatial pattern of the input signals. The proposed system successfully achieves wide-band super-directivity.
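A minimal sketch of the idea, not the authors' system: a network whose input is a short block of samples from every microphone, so it can exploit the temporal-spatial pattern rather than a single spatial snapshot. The array geometry, signals, and training targets below are hypothetical.

```python
# Minimal sketch (hypothetical setup): the network input is a short block of
# samples from every microphone, i.e. a temporal-spatial pattern rather than
# an instantaneous spatial snapshot.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
fs, n_mics, block = 8000, 4, 16     # sample rate, microphones, samples per block
d, c = 0.05, 343.0                  # mic spacing (m), speed of sound (m/s)

def simulate(angle_deg, freq, n_blocks):
    """Plane wave at `freq` Hz arriving from `angle_deg` at a linear array."""
    t = np.arange(n_blocks * block) / fs
    delays = np.arange(n_mics) * d * np.sin(np.radians(angle_deg)) / c
    x = np.stack([np.sin(2 * np.pi * freq * (t - tau)) for tau in delays], axis=1)
    return (x.reshape(n_blocks, block * n_mics),
            np.sin(2 * np.pi * freq * t).reshape(n_blocks, block))

# Training data: pass the target direction (0 deg), suppress an interferer (60 deg)
X0, y0 = simulate(0, 500, 400)
X1, _ = simulate(60, 1200, 400)
X, y = np.vstack([X0, X1]), np.vstack([y0, np.zeros_like(y0)])

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X, y)

# After training, the output should follow the 0-degree source and stay small
# for the 60-degree interferer.
print(np.std(net.predict(X0)), np.std(net.predict(X1)))
```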


Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing | 2012

Kansei Estimation Models for the Sense of Presence in Audio-Visual Content with Different Audio Reproduction Methods

Kenji Ozawa; Masashi Obinata; Yuichiro Kinoshita

The performance of both audio-visual content and systems is often evaluated by the sense of presence, which can be divided into two aspects: content presence and system presence. In our previous study, we constructed neural-network-based Kansei estimation models to evaluate content presence. Here we aim to incorporate system presence into the Kansei models. We examined five audio reproduction methods, which simulate different systems, and conducted subjective evaluation experiments using 12 audio-visual content items. The experiments indicate that the audio reproduction method influences the sense of presence in both the audio-only and audio-visual conditions, but that the effect is larger in the audio-only condition. We therefore introduced four features related to the spatial impression of sound as new inputs to the previous Kansei estimation models. The expanded models successfully estimated both the content presence and the system presence regardless of the condition. Hence, these models can quantitatively estimate the sense of presence for both audio-visual content and systems.
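Building on the sketch given for the earlier Kansei models, the extension can be illustrated (with hypothetical feature counts) simply by appending spatial-impression descriptors to the content features before regression:

```python
# Minimal sketch (hypothetical features): append spatial-impression descriptors
# of the audio reproduction to the content features before regression.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_items, n_content_feats, n_spatial_feats = 60, 12, 4   # 12 items x 5 methods (assumed)

content_feats = rng.normal(size=(n_items, n_content_feats))   # placeholder content features
spatial_feats = rng.normal(size=(n_items, n_spatial_feats))   # e.g. IACC-like descriptors (assumed)
presence = rng.uniform(1.0, 7.0, size=n_items)                # placeholder subjective scores

X = np.hstack([content_feats, spatial_feats])   # new inputs = content + spatial impression
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, presence)
print(model.predict(X[:3]))
```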


Scandinavian Audiology | 2000

Apparent change of masking functions with compression-type digital hearing aid.

Naoko Sasaki; Tetsuaki Kawase; Hiroshi Hidaka; Masaki Ogura; Tomonori Takasaka; Kenji Ozawa; Yôiti Suzuki; Toshio Sone

Signal perception under a narrow-band masker was examined in hearing-aid users, using a theoretical model of auditory nerve fibres (ANFs) with deteriorated tuning curves in addition to measurements of actual masking functions in subjects wearing hearing aids. The results indicate that the apparent masking function can be affected by the frequency-gain characteristic as well as by the degree of compression. Compression-type amplification with flat and/or high-frequency-weighted characteristics usually improves not only the apparent thresholds but also the apparent masked thresholds under low-frequency masking. On the other hand, a low-frequency masker amplified with higher gain by low-frequency-weighted amplification can, in some conditions, cause larger upward-masking effects on the perception of a higher-frequency signal. The present study may contribute to our understanding of the mechanisms underlying the effects of different amplification characteristics.


Scandinavian Audiology | 1998

Clinical evaluation of a portable digital hearing aid with narrow-band loudness compensation.

Hiroshi Hidaka; Tetsuaki Kawase; Shin Takahashi; Yôiti Suzuki; Kenji Ozawa; Syuichi Sakamoto; Naoko Sasaki; Koji Hirano; Narihisa Ueda; Toshio Sone; Tomonori Takasaka

A new portable digital hearing aid referred to as CLAIDHA (Compensate for Loudness by Analyzing Input-signal Digital Hearing Aid), which employs frequency-dependent amplitude compression based on narrow-band loudness compensation, was clinically evaluated in 159 subjects with hearing loss. The results of speech tests revealed better intelligibility compared with the subjects' own hearing aids; the advantage of using CLAIDHA in daily life was also indicated by the results of a questionnaire completed by the subjects. In about 64% of the subjects with a flat or gradually sloping hearing loss, CLAIDHA was satisfactorily adopted for daily use. However, in subjects with a steeply sloping hearing loss, and in subjects with losses mainly at high and low frequencies with near-normal mid-frequency hearing, this loudness compensation scheme seems to be slightly less effective.
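A greatly simplified sketch of per-band loudness-compensating compression, for illustration only (the thresholds, uncomfortable level, and compression rule below are assumptions, not CLAIDHA's actual processing):

```python
# Greatly simplified sketch: per-band compression that maps the normal dynamic
# range onto the listener's residual range between elevated threshold and
# uncomfortable level (UCL). All figures are illustrative.
import numpy as np

def band_gain_db(input_level_db, hearing_threshold_db,
                 normal_threshold_db=0.0, ucl_db=100.0):
    """Gain (dB) for one narrow band at a given input level."""
    x = np.clip(input_level_db, normal_threshold_db, ucl_db)
    ratio = (ucl_db - hearing_threshold_db) / (ucl_db - normal_threshold_db)
    target_level = hearing_threshold_db + (x - normal_threshold_db) * ratio
    return target_level - input_level_db

# Example band where the listener's threshold is elevated to 50 dB
for level in (20, 40, 60, 80):
    print(level, "dB in ->", round(band_gain_db(level, hearing_threshold_db=50.0), 1), "dB gain")
```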


International Conference on Networking, Sensing and Control | 2005

Compensation methods of sound quality for a car-audio equalizer

Kenji Ozawa; Takafumi Tomita; Akihiro Shiba; Tomohiko Ise; Yôiti Suzuki

When listening to music in a vehicle, the sound quality is degraded by vehicle noise. To find the best way to compensate for this deterioration, psychoacoustical experiments were conducted to evaluate four methods for designing the frequency characteristics of a car-audio equalizer: simple amplification, non-linear amplification, narrow-band loudness compensation, and masked-frequency-spectrum compensation. As a result, the narrow-band loudness compensation method was judged to be the best.
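As a toy illustration of band-wise compensation against cabin noise (not any of the four methods as actually implemented in the paper), one might compute equalizer boosts like this:

```python
# Toy sketch: boost each equalizer band just enough to keep the music a fixed
# margin above the cabin-noise level, capped at a maximum boost. The band
# levels are hypothetical.
import numpy as np

music_db = np.array([70, 68, 66, 64, 62, 60])   # music level per EQ band (dB)
noise_db = np.array([72, 66, 58, 50, 45, 40])   # cabin-noise level per band (dB)
margin_db, max_boost_db = 6.0, 12.0

boost = np.clip((noise_db + margin_db) - music_db, 0.0, max_boost_db)
print(boost)   # dB boost per band; the low, noise-dominated bands get the most
```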


International Journal of Audiology | 1999

Sound Localization for a Virtual Sound Source in Cases of Chronic Otitis Media

Tetsuaki Kawase; Tetsuo Koiwa; Ryo Yuasa; Yu Yuasa; Hiroshi Hidaka; Tomonori Takasaka; Kenji Ozawa; Yôiti Suzuki; Toshio Sone

Sound localization in subjects with chronic otitis media (COM) was examined before and soon after ear surgery using virtual sound sources presented over headphones, synthesized from the head-related transfer functions (HRTFs) of a normal subject. Localization ability in the COM patients was usually worse than in normal subjects, but better than expected compared with cases of acute loss. On the other hand, the effect of hearing improvement on localization ability in the COM patients was smaller than that of acute hearing loss simulated with earplugs in normal subjects. This seems to suggest that the localization cues used by patients with chronic hearing loss differ from those used under normal conditions.
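The generic rendering technique behind such virtual sound sources can be sketched as follows; the head-related impulse responses here are synthetic placeholders rather than measured HRTFs:

```python
# Minimal sketch of the generic technique: convolve a mono signal with left-
# and right-ear head-related impulse responses (HRIRs) for one direction to
# obtain a headphone signal. The HRIRs here are synthetic placeholders.
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(6)

mono = rng.normal(size=fs // 2)                                    # 0.5 s of noise as the source
hrir_left = rng.normal(size=256) * np.exp(-np.arange(256) / 32)    # placeholder HRIR
hrir_right = np.roll(hrir_left, 8) * 0.7                           # crude interaural delay and level difference

binaural = np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)], axis=1)
binaural /= np.max(np.abs(binaural))    # normalize before headphone playback
print(binaural.shape)                   # (samples, 2)
```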

Collaboration


Dive into Kenji Ozawa's collaborations.

Top Co-Authors

Yu Sato

University of Yamanashi
