Publications


Featured research published by Riitta Väänänen.


IEEE Transactions on Multimedia | 1999

AudioBIFS: Describing audio scenes with the MPEG-4 multimedia standard

Eric D. Scheirer; Riitta Väänänen; Jyri Huopaniemi

We present an overview of the AudioBIFS system, part of the Binary Format for Scene Description (BIFS) tool in the MPEG-4 International Standard. AudioBIFS is the tool that integrates the synthetic and natural sound coding functions in MPEG-4. It allows the flexible construction of soundtracks and sound scenes using compressed sound, sound synthesis, streaming audio, interactive and terminal-dependent presentation, three-dimensional (3-D) spatialization, environmental auralization, and dynamic download of custom signal-processing effects algorithms. MPEG-4 sound scenes are based on a model that is a superset of the model in VRML 2.0, and we describe how MPEG-4 builds upon VRML and what new capabilities it provides. We discuss the use of the MPEG-4 Structured Audio Orchestra Language (SAOL) for writing downloadable effects, present an example sound scene built with AudioBIFS, and describe the current state of implementations of the standard.
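
As a rough illustration of the scene-graph idea behind AudioBIFS (this is not BIFS syntax or any real MPEG-4 API; every class name below is hypothetical), a sound scene can be pictured as a tree whose leaves decode elementary audio streams and whose interior nodes mix and spatialize them:

```python
# Hypothetical, simplified model of an AudioBIFS-style scene graph.
# None of these classes exist in MPEG-4; they only illustrate composing
# decoded streams through mixing and spatialization nodes.

class AudioSourceNode:
    """Leaf node: yields decoded samples from one elementary stream."""
    def __init__(self, stream_id):
        self.stream_id = stream_id

    def render(self):
        # A real terminal would pull a block from the MPEG-4 decoder here.
        return [0.0] * 1024  # placeholder block of silence

class AudioMixNode:
    """Interior node: weighted mix of its children's output."""
    def __init__(self, children, gains):
        self.children, self.gains = children, gains

    def render(self):
        blocks = [child.render() for child in self.children]
        return [sum(g * b[i] for g, b in zip(self.gains, blocks))
                for i in range(len(blocks[0]))]

class SoundNode:
    """Scene-level node: gives the mixed sound a 3-D position."""
    def __init__(self, source, position):
        self.source, self.position = source, position

# A tiny scene: two streams mixed 70/30, placed at (1, 0, 2) metres.
scene = SoundNode(
    AudioMixNode([AudioSourceNode(1), AudioSourceNode(2)], [0.7, 0.3]),
    position=(1.0, 0.0, 2.0),
)
```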


IEEE Transactions on Multimedia | 2004

Advanced AudioBIFS: virtual acoustics modeling in MPEG-4 scene description

Riitta Väänänen; Jyri Huopaniemi

We present the virtual acoustics modeling framework that is part of the MPEG-4 standard. A scene description language called the Binary Format for Scenes (BIFS) is defined within MPEG-4 for making multimedia presentations that include various types of audio and visual data. BIFS also provides means for creating three-dimensional (3-D) virtual worlds or scenes, where visual and sound objects can be positioned and given temporal behavior. Local interaction between the user and the scene can be added to MPEG-4 applications. Typically, the user can navigate in a 3-D scene so that it is viewed from different positions. If there are sound source objects in a scene, the sounds may be spatialized so that they are heard coming from the positions defined for them. A subset of BIFS, called Advanced AudioBIFS, aims at enhanced modeling of 3-D sound environments. In this framework, sounds can be given positions, and the virtual environment where they appear can be associated with acoustic properties that allow modeling of phenomena such as air absorption, the Doppler effect, sound reflections, and reverberation. These features can be used for adding room acoustic effects to sound in the MPEG-4 terminal and for creating immersive 3-D audiovisual scenes.
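
To make two of the listed phenomena concrete, here is a minimal sketch (an illustration only, not the normative MPEG-4 rendering algorithm) of 1/r distance attenuation and the Doppler shift for a single moving point source and a static listener:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def distance_gain(source_pos, listener_pos, ref_dist=1.0):
    """1/r amplitude attenuation relative to a reference distance."""
    r = math.dist(source_pos, listener_pos)
    return ref_dist / max(r, ref_dist)

def doppler_factor(source_pos, source_vel, listener_pos):
    """Frequency scaling factor for a moving source; > 1 means the
    source approaches the listener and the pitch rises."""
    r = math.dist(source_pos, listener_pos)
    # Unit vector pointing from the source toward the listener.
    direction = [(l - s) / r for s, l in zip(source_pos, listener_pos)]
    radial_speed = sum(v * d for v, d in zip(source_vel, direction))
    return SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_speed)

# A source 10 m in front of the listener, approaching at 20 m/s:
print(distance_gain((10, 0, 0), (0, 0, 0)))                 # 0.1
print(doppler_factor((10, 0, 0), (-20, 0, 0), (0, 0, 0)))   # ~1.06
```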


Personal and Ubiquitous Computing | 2016

Guided by music: pedestrian and cyclist navigation with route and beacon guidance

Robert Albrecht; Riitta Väänänen; Tapio Lokki

Music listening and navigation are both common tasks for mobile device users. In this study, we integrated music listening with a navigation service, allowing users to follow the perceived direction of the music to reach their destination. This navigation interface provided users with two different guidance methods: route guidance and beacon guidance. The user experience of the navigation service was evaluated with pedestrians in a city center and with cyclists in a suburban area. The results show that spatialized music can be used to guide pedestrians and cyclists toward a destination without any prior training, offering a pleasant navigation experience. Both route and beacon guidance were deemed good alternatives, but the preference between them varied from person to person and depended on the situation. Beacon guidance was generally considered to be suitable for familiar surroundings, while route guidance was seen as a better alternative for areas that are unfamiliar or more difficult to navigate.
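
The paper evaluates the user experience rather than specifying a rendering algorithm, but the core idea, steering the apparent direction of the music toward a waypoint or beacon, can be sketched with plain constant-power stereo panning (the study itself used spatialized audio; all names below are hypothetical):

```python
import math

def bearing(user, target):
    """Compass-style bearing from user to target, in radians.
    Coordinates are planar (east, north) pairs for simplicity;
    a real service would work on latitude/longitude."""
    de, dn = target[0] - user[0], target[1] - user[1]
    return math.atan2(de, dn)

def pan_gains(user, heading, target):
    """Constant-power stereo gains so the music seems to come from
    the target's direction relative to where the user is facing."""
    azimuth = bearing(user, target) - heading     # 0 = straight ahead
    # Clamp to the frontal half-plane for plain two-channel panning.
    azimuth = max(-math.pi / 2, min(math.pi / 2, azimuth))
    pan = azimuth / math.pi + 0.5                 # 0 = left, 1 = right
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)

# User at the origin facing north; beacon to the north-east:
left, right = pan_gains((0, 0), heading=0.0, target=(100, 100))
print(left, right)  # right channel louder, so the music pulls right
```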


Journal of the Acoustical Society of America | 1999

Virtual concerts in virtual spaces—in real time

Tapio Lokki; Lauri Savioja; Jarmo Hiipakka; Rami Hanninen; Ville Pulkki; Riitta Väänänen; Jyri Huopaniemi; Tommi Ilmonen; Tapio Takala

The DIVA system is an experimental interactive real-time virtual environment with synchronized sound and animation components. The system provides real-time automatic character animation and visualization, dynamic behavior control of virtual actors, interaction through motion analysis, sound generation with physical models of musical instruments, and three-dimensional sound auralization. The combined effect of 3-D visual and acoustic elements creates stronger immersion than would be possible with either alone. As a demonstration, a virtual band with four artificial musicians has been implemented. The user interacts with the virtual musicians by showing the tempo with a baton, as real conductors do. The animated band follows the gestures of the conductor, and another user controls the viewpoint of the audience. Due to the real-time acoustic modeling and sound rendering, both users hear the auralized music in a real concert hall.
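
The abstract does not detail how the baton gestures are turned into tempo, but the basic idea of a tempo follower can be illustrated with a toy smoother over detected beat times (purely an illustration, not the DIVA system's algorithm):

```python
def tempo_from_beats(beat_times, smoothing=0.5, initial_bpm=120.0):
    """Toy tempo follower: exponentially smoothed inter-beat intervals.
    beat_times are ascending timestamps (seconds) of detected baton
    beats; returns the BPM estimate after each new beat."""
    bpm = initial_bpm
    estimates = []
    for prev, cur in zip(beat_times, beat_times[1:]):
        interval = cur - prev
        if interval > 0:
            bpm = smoothing * bpm + (1 - smoothing) * (60.0 / interval)
        estimates.append(bpm)
    return estimates

# A conductor gradually speeding up from 100 toward 120 BPM:
beats = [0.0, 0.6, 1.18, 1.73, 2.25, 2.75]
print([round(b) for b in tempo_from_beats(beats)])
```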


European Signal Processing Conference | 2017

Real-time adaptive equalization for headphone listening

Juho Liski; Vesa Välimäki; Sampo Vesa; Riitta Väänänen

The sound quality experienced with headphones varies between individuals. Especially with insert headphones, a constant equalization may not work properly when a specific perceived frequency response is desired. Instead, adaptive individualized equalization can be used; previously, this required multiple sensors in a headphone earpiece. This paper proposes a signal processing algorithm for continuous online equalization of a headset with a single microphone. The magnitude response of the headphones is estimated using arbitrary reproduced sounds, and the headphone response is then equalized to a user-selected target response with a graphical equalizer. Measurements show that the proposed algorithm produces accurate estimates with different sound materials and that the resulting equalization closely matches the target response. Since the target response can be set arbitrarily, the algorithm can be implemented in multiple applications to obtain accurate and quick personalization.
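
As a minimal sketch of the general approach described here, single-microphone estimation of the playback path followed by graphical-equalizer matching, one could estimate the transfer function from Welch auto- and cross-spectra and derive per-band correction gains (a simplified stand-in, not the paper's algorithm; the function names are hypothetical):

```python
import numpy as np
from scipy.signal import csd, welch

def estimate_magnitude_response(x, y, fs, nperseg=4096):
    """Estimate |H(f)| of the playback path from the signal sent to
    the headphone (x) and the in-ear microphone signal (y), using
    Welch spectra: H = Pxy / Pxx."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    _, pxy = csd(x, y, fs=fs, nperseg=nperseg)
    return f, np.abs(pxy / np.maximum(pxx, 1e-12))

def eq_band_gains_db(f, mag, target_db, centers):
    """Per-band correction gains (dB) for a graphical equalizer:
    target response minus the estimated response at each band center."""
    est_db = 20 * np.log10(np.maximum(mag, 1e-12))
    return [target_db - np.interp(fc, f, est_db) for fc in centers]

# Toy usage: noise as the "arbitrary reproduced sound" through a
# made-up headphone path, equalized toward a flat 0 dB target.
fs = 48000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 2)
y = np.convolve(x, [0.6, 0.3, 0.1])[:len(x)]   # stand-in playback path
f, mag = estimate_magnitude_response(x, y, fs)
centers = [125, 250, 500, 1000, 2000, 4000, 8000, 16000]
print([round(g, 1) for g in eq_band_gains_db(f, mag, 0.0, centers)])
```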


International Computer Music Conference | 1997

Efficient and parametric reverberator for room acoustics modeling

Jyri Huopaniemi; Vesa Välimäki; Matti Karjalainen; Riitta Väänänen


Journal of the Audio Engineering Society | 2003

User Interaction and Authoring of 3D Sound Scenes in the Carrouso EU project

Riitta Väänänen


Archive | 2006

Equalization based on digital signal processing in downsampled domains

Riitta Väänänen; Jarmo Hiipakka


Archive | 1999

Method and system for processing directed sound in an acoustic virtual environment

Jyri Huopaniemi; Riitta Väänänen


Archive | 2003

Parametrization, auralization, and authoring of room acoustics for virtual reality applications

Riitta Väänänen

Collaboration


Dive into Riitta Väänänen's collaborations.

Top Co-Authors

Jyri Huopaniemi

Helsinki University of Technology
