Jyri Huopaniemi
Nokia
Publication
Featured research published by Jyri Huopaniemi.
International Conference on Acoustics, Speech, and Signal Processing | 2002
Vesa T. Peltonen; Juha T. Tuomi; Anssi Klapuri; Jyri Huopaniemi; Timo Sorsa
In this paper, we address the problem of computational auditory scene recognition and describe methods for classifying auditory scenes into predefined classes. By auditory scene recognition we mean recognition of an environment using audio information only. The auditory scenes comprised 26 everyday outdoor and indoor environments, such as streets, restaurants, offices, family homes, and cars. Two completely different but almost equally effective classification systems were used: band-energy ratio features with a 1-NN classifier, and Mel-frequency cepstral coefficients with Gaussian mixture models. The best recognition rate obtained, for 17 of the 26 scenes and an analysis duration of 30 seconds, was 68.4%. For comparison, human listeners reached 70% accuracy for 25 different scenes, with an average response time of around 20 seconds. The efficiency of different acoustic features and the effect of test-sequence length were also studied.
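The second of the two systems lends itself to a compact illustration. Below is a minimal sketch of an MFCC + GMM scene classifier in the spirit of the approach described above; it is not the authors' implementation, and the use of librosa and scikit-learn, the 13 coefficients, and the 8 mixture components are all assumptions made for the example.

```python
# Sketch of an MFCC + GMM auditory scene classifier.
# Not the paper's implementation; libraries, file layout, and all
# parameters (13 MFCCs, 8 diagonal-covariance components) are assumptions.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    # Load audio at its native rate, compute frame-wise MFCCs, and
    # transpose to an (n_frames, n_mfcc) matrix for scikit-learn.
    y, sr = librosa.load(path, sr=None, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_scene_models(scenes, n_components=8):
    # `scenes` maps a scene label (e.g. "street") to a list of paths
    # to training recordings made in that environment.
    models = {}
    for label, paths in scenes.items():
        X = np.vstack([mfcc_features(p) for p in paths])
        models[label] = GaussianMixture(
            n_components=n_components, covariance_type="diag").fit(X)
    return models

def classify(path, models):
    # Score the test clip under every scene model and return the label
    # with the highest average per-frame log-likelihood.
    X = mfcc_features(path)
    return max(models, key=lambda label: models[label].score(X))
```

Training one GMM per scene class and picking the model with the highest average log-likelihood is the standard GMM classification recipe; truncating the test clip to its first 30 seconds would correspond to the analysis duration reported above.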
IEEE Transactions on Multimedia | 2004
Riitta Väänänen; Jyri Huopaniemi
We present the virtual acoustics modeling framework that is part of the MPEG-4 standard. A scene description language called the Binary Format for Scenes (BIFS) is defined within MPEG-4 for creating multimedia presentations that include various types of audio and visual data. BIFS also provides the means for creating three-dimensional (3-D) virtual worlds or scenes in which visual and sound objects can be positioned and given temporal behavior. Local interaction between the user and the scene can be added to MPEG-4 applications; typically the user can navigate in a 3-D scene so that it is viewed from different positions. If a scene contains sound source objects, the sounds may be spatialized so that they are heard as coming from the positions defined for them. A subset of BIFS, called Advanced AudioBIFS, aims at enhanced modeling of 3-D sound environments. In this framework, sounds can be given positions, and the virtual environment in which they appear can be associated with acoustic properties that allow modeling of phenomena such as air absorption, the Doppler effect, sound reflections, and reverberation. These features can be used for adding room acoustic effects to sound in the MPEG-4 terminal and for creating immersive 3-D audiovisual scenes.
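To make the acoustic phenomena mentioned above concrete, here is a minimal sketch, not the MPEG-4 BIFS API, of how a renderer might derive gain and pitch-shift factors for a point sound source from scene geometry. All names here (PointSource, ref_distance, db_per_meter) are hypothetical, and the air-absorption model is a deliberately crude frequency-independent approximation.

```python
# Hypothetical illustration of distance attenuation, air absorption,
# and Doppler shift for a point source and a static listener.
# Not the MPEG-4 AudioBIFS API; all names and constants are assumptions.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

class PointSource:
    def __init__(self, position, velocity=(0.0, 0.0, 0.0)):
        self.position = position  # (x, y, z) in meters
        self.velocity = velocity  # (vx, vy, vz) in m/s

def distance_gain(source, listener_pos, ref_distance=1.0):
    # Inverse-distance (1/r) attenuation beyond a reference distance.
    d = math.dist(source.position, listener_pos)
    return min(1.0, ref_distance / max(d, 1e-6))

def air_absorption_gain(source, listener_pos, db_per_meter=0.02):
    # Frequency-independent approximation of air absorption in dB/m.
    d = math.dist(source.position, listener_pos)
    return 10.0 ** (-db_per_meter * d / 20.0)

def doppler_factor(source, listener_pos):
    # Pitch-shift ratio for a moving source heard by a static listener:
    # factor < 1 (lower pitch) when the source is receding.
    d = math.dist(source.position, listener_pos)
    if d < 1e-6:
        return 1.0
    # Radial velocity: positive when the source moves away from the listener.
    direction = [(s - l) / d for s, l in zip(source.position, listener_pos)]
    v_radial = sum(v * u for v, u in zip(source.velocity, direction))
    return SPEED_OF_SOUND / (SPEED_OF_SOUND + v_radial)
```

Advanced AudioBIFS expresses such effects declaratively through scene nodes rather than imperative code, but the underlying physical quantities, source position, medium absorption, and relative velocity, are the same.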
Journal of the Acoustical Society of America | 2003
Jyri Huopaniemi; Timo Sorsa; Peter Boda
Archive | 2003
Ole Kirkeby; Jyri Huopaniemi; Timo Sorsa
Journal of the Acoustical Society of America | 2002
Jyri Huopaniemi
International Symposium on Music Information Retrieval | 2001
Timo Sorsa; Jyri Huopaniemi
Archive | 2002
Lauri Savioja; Tapio Lokki; Jyri Huopaniemi
Archive | 1999
Jyri Huopaniemi; Riitta Väänänen
Archive | 1998
Eric D. Scheirer; Riitta Väänänen; Jyri Huopaniemi
Journal of the Audio Engineering Society | 1999
Nick Zacharov; Jyri Huopaniemi