Jarmo Hiipakka
Nokia
Publications
Featured research published by Jarmo Hiipakka.
Proceedings of SPIE | 2001
Jarmo Hiipakka; Tommi Ilmonen; Tapio Lokki; Matti Groehn; Lauri Savioja
This paper presents the audio system built for the virtual room at Helsinki University of Technology. First, we discuss the general problems for multichannel sound reproduction caused by the construction of, and the equipment in, virtual rooms. We also describe the acoustics of the room in question and the effect of the back-projected screens and reverberation on the sound. We then introduce compensation of the spectral deficiencies and address the problems posed by the large listening area and high-frequency attenuation. The hardware configuration used for sound reproduction is briefly described. We also report on the software applications and libraries built for sound signal processing and 3-D sound reproduction.
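The compensation filters themselves are not given here; as a rough sketch of the general idea, the snippet below designs a linear-phase FIR filter that inverts an assumed high-frequency attenuation caused by back-projection screens. The breakpoint frequencies, gains, and the +12 dB boost ceiling are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.signal import firwin2, lfilter

    # Hypothetical screen attenuation: magnitude response sampled at
    # normalized frequencies (1.0 = Nyquist), linear gain.
    freqs = [0.0, 0.2, 0.5, 0.9, 1.0]
    screen_gain = np.array([1.0, 1.0, 0.7, 0.4, 0.3])  # assumed losses

    # Compensation target is the inverse response, capped at +12 dB
    # so the boost stays within amplifier and driver limits.
    max_boost = 10.0 ** (12.0 / 20.0)
    target = np.minimum(1.0 / screen_gain, max_boost)

    # Linear-phase FIR approximating the compensation response.
    fir = firwin2(numtaps=255, freq=freqs, gain=target)

    def compensate(signal):
        # Filter a mono signal through the compensation FIR.
        return lfilter(fir, [1.0], signal)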
Human-Computer Interaction with Mobile Devices and Services | 2002
Gaetan Lorho; Jarmo Hiipakka; Juha Marila
This paper describes a technique to support user interaction in a hierarchical menu, based on spatial sound separation. A complex menu structure is represented in space using a limited number of sound positions obtained by stereo panning or 3-D audio processing techniques. Spatial organisation of menu items can be designed in a logical way to provide navigation cues to the user, independent of the menu item nature. Two different strategies for menu presentation and interaction are described and compared in this paper. Finally, an application of this technique to the navigation in a large music collection is considered. This case study is an interesting example of usage situation for which eyes-free interaction would be useful, for instance on a portable audio player using headphones and a small remote control.
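As a minimal sketch of the stereo-panning variant described above, the snippet below spreads menu items across equally spaced stereo positions using constant-power panning; the function names, item labels, and spacing are illustrative assumptions rather than details from the paper.

    import numpy as np

    def pan_gains(position):
        # Constant-power stereo pan; position in [-1, 1] (left..right).
        theta = (position + 1.0) * np.pi / 4.0  # map to [0, pi/2]
        return np.cos(theta), np.sin(theta)     # (left, right) gains

    def place_menu_items(items):
        # Spread menu items over equally spaced stereo positions.
        n = len(items)
        positions = np.linspace(-1.0, 1.0, n) if n > 1 else [0.0]
        return {item: pan_gains(p) for item, p in zip(items, positions)}

    # Hypothetical music-collection menu spread across the stereo stage.
    for item, (gl, gr) in place_menu_items(
            ["Artists", "Albums", "Tracks", "Playlists"]).items():
        print(f"{item:>9}: L={gl:.2f} R={gr:.2f}")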
Organised Sound | 1998
Tapio Lokki; Jarmo Hiipakka; Rami Hanninen; Tommi Ilmonen; Lauri Savioja; Tapio Takala
Visual rendering is the process of creating synthetic images of digital models. The modelling of sound synthesis and propagation in a virtual space is called sound rendering. In this article we review different audiovisual rendering techniques suitable for real-time rendering of three-dimensional virtual worlds. Virtual environments are useful in various application areas, for example in architectural visualisation. With audiovisual rendering, the lighting and acoustics of a modelled concert hall can be experienced early in the design stage of the building. In this article we demonstrate an interactive audiovisual rendering system in which an animated virtual orchestra plays in a modelled concert hall. The virtual musicians are conducted by a real conductor who wears a wired data suit and carries a baton. The conductor and the audience hear the music rendered according to the acoustics of the virtual concert hall, creating a lifelike experience.
Journal of New Music Research | 2001
Matti Karjalainen; Tero Tolonen; Vesa Välimäki; Cumhur Erkut; Mikael Laurson; Jarmo Hiipakka
Physical modeling and model-based sound synthesis have recently been among the most active topics of computer music and audio research. In the modeling approach one typically tries to simulate and duplicate the most prominent sound generation properties of the acoustic musical instrument under study. If desired, the models developed may then be modified in order to create sounds that are not common or even possible from physically realizable instruments. In addition to physically related principles it is possible to combine physical models with other synthesis and signal processing methods to realize hybrid modeling techniques. This article gives an overview of some recent results in model-based sound synthesis and related signal processing techniques. The focus is on modeling and synthesizing plucked string sounds, although the techniques may find much more widespread application. First, as a background, an advanced linear model of the acoustic guitar is discussed along with model control principles. Then the methodology to include inherent nonlinearities and time-varying features is introduced. Examples of nonlinearities are studied in the context of two string instruments, the kantele and the tanbur, which exhibit interesting nonlinear effects.
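The guitar, kantele, and tanbur models discussed in the article are considerably more elaborate, but the digital-waveguide idea underlying plucked-string synthesis can be illustrated with a basic Karplus-Strong loop: a noise-filled delay line with an averaging lowpass filter whose damping controls the decay. The sketch below is a textbook baseline, not the authors' model.

    import numpy as np

    def karplus_strong(freq, duration, fs=44100, damping=0.996):
        # Basic plucked string: a noise burst circulates in a delay
        # line; the averaging lowpass makes high harmonics decay faster.
        period = int(fs / freq)                 # delay length in samples
        buf = np.random.uniform(-1, 1, period)  # pluck excitation
        out = np.empty(int(fs * duration))
        for i in range(len(out)):
            out[i] = buf[i % period]
            buf[i % period] = damping * 0.5 * (
                buf[i % period] + buf[(i + 1) % period])
        return out

    tone = karplus_strong(freq=196.0, duration=2.0)  # a G3 "string"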
Journal of the Acoustical Society of America | 1999
Tapio Lokki; Lauri Savioja; Jarmo Hiipakka; Rami Hanninen; Ville Pulkki; Riitta Väänänen; Jyri Huopaniemi; Tommi Ilmonen; Tapio Takala
The DIVA system is an experimental interactive real-time virtual environment with synchronized sound and animation components. The system provides real-time automatic character animation and visualization, dynamic behavior control of virtual actors, interaction through motion analysis, sound generation with physical models of musical instruments, and three-dimensional sound auralization. The combined effect of 3-D visual and acoustic elements creates stronger immersion than would be possible with either alone. As a demonstration, a virtual band with four artificial musicians has been implemented. The user interacts with the virtual musicians by showing the tempo with a baton, like real conductors do. The animated band follows the gestures of the conductor and another user controls the viewpoint of the audience. Due to the real-time acoustic modeling and sound rendering, both users hear the auralized music in a real concert hall.
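The abstract does not spell out the panning method used for the 3-D auralization; as one illustrative building block of such systems, the sketch below implements two-speaker amplitude panning in the spirit of vector base amplitude panning (VBAP), solving a small linear system for the speaker gains. The speaker angles and coordinate conventions are assumptions for the example.

    import numpy as np

    def vbap_2d(source_az, speaker_az=(-30.0, 30.0)):
        # Gains for one source between two speakers: solve L g = p,
        # where the columns of L are the speaker direction vectors,
        # then normalize the gains for constant power.
        p = np.array([np.cos(np.radians(source_az)),
                      np.sin(np.radians(source_az))])
        L = np.column_stack(
            [(np.cos(np.radians(a)), np.sin(np.radians(a)))
             for a in speaker_az])
        g = np.clip(np.linalg.solve(L, p), 0.0, None)
        return g / np.linalg.norm(g)

    print(vbap_2d(0.0))   # centred source: equal gains on both speakers
    print(vbap_2d(20.0))  # source pulled toward the +30 degree speaker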
Archive | 2001
Jarmo Hiipakka
Journal of The Audio Engineering Society | 2004
Aki Härmä; Julia Jakka; Miikka Tikander; Matti Karjalainen; Tapio Lokki; Jarmo Hiipakka; Gaetan Lorho
Archive | 2008
Jussi Virolainen; Jarmo Hiipakka
Archive | 2005
Ole Kirkeby; Jarmo Hiipakka
Archive | 2006
Jarmo Hiipakka