Mikael Laurson
Sibelius Academy
Publication
Featured research published by Mikael Laurson.
Computer Music Journal | 1999
Gérard Assayag; Camilo Rueda; Mikael Laurson; Carlos Agon; Olivier Delerue
In recent years, IRCAM has been exploring and developing software for computer-assisted composition (CAC). These software packages allow composers and musicologists to formalize and experiment with the structures and dynamics of musical languages. The Formes program (Rodet and Cointe 1984), although primarily devoted to the control of sound synthesis, was really a compositional environment with a high-level object-oriented architecture. The Crime environment (Assayag, Castellengo, and Malherbe 1985; Amiot, Assayag, Malherbe, and Riotte 1986) was the first attempt at IRCAM to realize a general CAC environment where the user could define and control abstract musical formalisms. Francis Courtot developed CARLA as an attempt to use a visual programming interface to a Prolog-based logic-programming system (Balaban, Ebcioglu, and Laske 1992). The development of the PatchWork environment, by M. Laurson, J. Duthen, and C. Rueda (Laurson and Duthen 1989; Laurson 1996), was the next stage in the development of CAC programs at IRCAM. The combination of programming simplicity and a highly visual interface in a personal computing concept created an infatuation with PatchWork among European composers with highly diverse musical and aesthetic backgrounds, including Antoine Bonnet, Michel Fano, Brian Ferneyhough, Gerard Grisey, Paavo Heininen, Magnus Lindberg, Claudy Malherbe, Tristan Murail, Kaija Saariaho, and many others. OpenMusic, designed by G. Assayag and C. Agon (Assayag, Agon, Fineberg, and Hanappe 1997; Agon, Assayag, Delerue, and Rueda 1998), is the most recent IRCAM CAC environment. It is a visual interface to CLOS, the Common Lisp Object System (Steele 1990). Aside from being a superset of PatchWork, it opens new territories by allowing the composer to visually design sophisticated musical object classes. It introduces the maquette concept, which enables high-level control of musical material over time, and it revises the PatchWork visual language in a modern way.
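Because OpenMusic is a visual interface to CLOS, the musical object classes a composer designs graphically correspond to ordinary CLOS class definitions underneath. The following minimal sketch is only a hypothetical illustration of that idea; the class, slot, and method names are invented here and are not OpenMusic's actual API.

;; Hypothetical illustration; these names are not OpenMusic's API.
(defclass chord-event ()
  ((pitches  :initarg :pitches  :accessor pitches  :initform '(60 64 67))
   (onset    :initarg :onset    :accessor onset    :initform 0)
   (duration :initarg :duration :accessor duration :initform 1000)))

;; A generic operation specialized on the class, e.g. transposition in semitones.
(defmethod transpose ((c chord-event) interval)
  (make-instance 'chord-event
                 :pitches  (mapcar (lambda (p) (+ p interval)) (pitches c))
                 :onset    (onset c)
                 :duration (duration c)))

;; (transpose (make-instance 'chord-event) 12) => the same chord an octave higher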
Computer Music Journal | 2001
Mikael Laurson; Cumhur Erkut; Vesa Välimäki; Mika Kuuskankare
Computer Music Journal, 25:3, pp. 38–49, Fall 2001
Computer Music Journal | 2009
Mikael Laurson; Mika Kuuskankare; Vesa Norilo
This article provides an overview of our free, Lisp-based, cross-platform, visual programming environment called PWGL (the name is an acronym for PatchWork Graphical Language). PWGL can be used for computer-aided composition, music analysis, and sound synthesis. Our work is influenced by our past experience with PatchWork (Laurson 1996; Assayag et al. 1999). However, PWGL has been completely rewritten and redesigned in our attempt to develop and improve many of the useful concepts behind PatchWork. PWGL and several of its applications have already been discussed in our previous work, including Laurson and Kuuskankare (2002, 2006), Kuuskankare and Laurson (2006), and Laurson, Norilo, and Kuuskankare (2005). Since 2002, PWGL has undergone several revisions and additions, including the public release, the launch of the PWGL home page, and the inclusion of tutorials and new programming interfaces; each of these has been an attempt to make our system more available and more easily approachable within the computer music community. The main purpose of the present article is to give a general overview of the current state of the system with an emphasis on features that are useful for potential PWGL users, such as composers, music analysts, and researchers. PWGL is programmed in ANSI Common Lisp (Steele 1990) using LispWorks (www.lispworks.com), which is source-code compatible across several different operating systems, such as Macintosh OS X, Microsoft Windows, and Linux. LispWorks provides support for Foreign Language Interfaces, by which a program written in one programming language can call routines or make use of services written in another. LispWorks also supports the OpenGL graphics library API.
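To make the Foreign Language Interface mentioned above a little more concrete, the fragment below shows the kind of binding LispWorks' FLI supports: a C library routine (here the standard cos function) is declared once and then called like any Lisp function. This is a minimal sketch rather than anything taken from PWGL's sources.

;; Minimal LispWorks FLI sketch: bind the C library's cos() and call it from Lisp.
(fli:define-foreign-function (c-cos "cos")
    ((x :double))
  :result-type :double)

;; (c-cos 0.0d0) => 1.0d0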
Computer Music Journal | 2006
Mika Kuuskankare; Mikael Laurson
At present, there are a large number of commercial and free programs dealing with music notation. Two of the most notable free software projects are LilyPond (Nienhuys and Nieuwenhuizen 2003) and Guido (Hoos et al. 1998; Renz 2002). Both of these programs are LaTeX-like languages that are used to describe the contents of a musical score in a textual form and not through a point-and-click user interface. Furthermore, a Web site dedicated to music notation software, ace.acadiau.ca/score/others.htm, lists a large number of other commercial and free music notation programs, including simple notation editors (e.g., abc) for typesetting relatively simple notation; special-purpose editors like Grégoire for typesetting Gregorian chant, Django for typesetting tablature, and GOODFEEL for Braille notation; and full-featured programs like Berlioz, Igor (programmed in Lisp), Nightingale, and SCORE. Finally, the two de facto standard commercial notation programs are Finale and Sibelius. Recently, Web-viewable applications have also started to emerge. There are a few commercial approaches, such as Scorch by Sibelius Software. ScoreSVG (Bays 2005), in turn, is a free alternative based on Scalable Vector Graphics, a language for describing two-dimensional vector graphics in XML.

A handful of Lisp-based applications also exist that are aimed at representing musical data, such as Common Music Notation (CMN; Schottstaedt 1997), the Rhythm-Editor of PatchWork (RTM; Laurson 1996), and the musical editors in OpenMusic (Assayag et al. 1999). One of the earliest experiments, MUZACS (Kornfeld 1980), was even written for a Lisp machine. Of these editors, RTM is primarily intended to represent musical raw material, and thus its editing capabilities are limited. This is also the case with OpenMusic's notation editors. CMN is a powerful typesetting package but lacks a graphical user interface (GUI).

Expressive Notation Package (ENP; Kuuskankare and Laurson 2002) is a music notation program that has been developed to meet the requirements of computer-aided composition, music analysis, and synthesis control. ENP is programmed with LispWorks Common Lisp (www.lispworks.com). LispWorks, in turn, is a Lisp implementation that is source-code compatible across Windows, Linux, Mac OS X, and some UNIX platforms. ENP is primarily intended to represent Western musical notation roughly from the 17th century onward, with a strong emphasis on modern notation. ENP is not a full-featured music typesetting program. However, it is designed to produce automatic, reasonable musical typesetting according to the common practices described, for example, in Stone (1980) and Read (1982). ENP is used as a notational front end in a visual programming language called PWGL (Laurson and Kuuskankare 2002). PWGL is a combination of several software packages built on top of Common Lisp. In addition to ENP, the components include, for example, a rule-based programming language called PWGLConstraints (Laurson 1996) and a sound-synthesis program called PWGLSynth (Laurson et al. 2005). The development of PWGL is focused on the Mac OS X operating system, but it currently also runs under Windows XP. A Linux version is under consideration. PWGL is freeware, and it can be downloaded from our Web site at www.siba.fi/PWGL/.

ENP has a sophisticated GUI based on direct editing, a rich set of musical primitives, an extended concept of expressions, and algorithmic control over scores. It contains a unique set of features that cannot be found elsewhere, at least to this extent of integration in one program. ENP allows one to analyze, modify, view, and annotate scores in many different ways: a text-based format can be used to generate scores algorithmically; it is possible to modify data contained by the objects in a score by using a scripting language; and our rule-based programming language can be used as well.
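As an illustration of what generating a score algorithmically from a text-based description can look like, the sketch below builds a score description as a nested s-expression in Common Lisp. It is purely hypothetical and does not reproduce ENP's actual score syntax.

;; Purely hypothetical sketch; this is NOT ENP's real score format.
;; Build a one-part, one-voice score description as nested lists.
(defun make-scale-score (start-midi n-notes)
  (list :score
        (list :part
              (list :voice
                    (loop for i below n-notes
                          collect (list :note :pitch (+ start-midi i)
                                        :duration 1/4))))))

;; (make-scale-score 60 8) => a rising chromatic scale from middle C as plain data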
Journal of the Acoustical Society of America | 2006
Henri Penttinen; Jyri Pakarinen; Vesa Välimäki; Mikael Laurson; Henbing Li; Marc Leman
This paper presents a model-based sound synthesis algorithm for the Chinese plucked string instrument called the guqin. The instrument is fretless, which enables smooth pitch glides from one note to another. A version of the digital waveguide synthesis approach is used, where the string length is time-varying and its energy is scaled properly. A body model filter is placed in cascade with the string model. Flageolet tones are synthesized with the so-called ripple filter structure, which is an FIR comb filter in the delay line of a digital waveguide model. In addition, signal analysis of recorded guqin tones is presented. Friction noise produced by gliding the finger across the soundboard has a harmonic structure and is proportional to the gliding speed. For pressed tones, one end of a vibrating string is terminated either by the nail of the thumb or a fingertip. The tones terminated with a fingertip decay faster than those terminated with a thumb. Guqin tones are slightly inharmonic and they exhibit phantom partials. The synthesis model takes into account these characteristic features of the instrument and is able to reproduce them. The synthesis model will be used for rule-based synthesis of guqin music.
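For readers unfamiliar with the ripple filter idea, the sketch below shows a single delay-line plucked string with a simple averaging loss filter and an extra FIR comb tap inside the loop. It is a minimal illustration with assumed parameter values, not the published guqin model.

;; Minimal sketch, not the published guqin model; parameter values are
;; assumptions chosen only for illustration.
(defun pluck-string (delay-length n-samples &key (loss 0.996d0)
                                                 (ripple 0.0d0)
                                                 (ripple-tap 0))
  (let ((line (make-array delay-length :element-type 'double-float
                                       :initial-element 0.0d0))
        (out  (make-array n-samples :element-type 'double-float
                                    :initial-element 0.0d0))
        (prev 0.0d0))
    ;; Idealized pluck: fill the delay line with white noise.
    (dotimes (i delay-length)
      (setf (aref line i) (- (random 2.0d0) 1.0d0)))
    (dotimes (n n-samples out)
      (let* ((idx (mod n delay-length))
             ;; FIR comb ("ripple") inside the delay line: add a scaled tap
             ;; taken part of the way along the string.
             (x (+ (aref line idx)
                   (* ripple (aref line (mod (+ idx ripple-tap) delay-length)))))
             ;; Loss filter: two-point average scaled by the loop gain.
             (y (* loss 0.5d0 (+ x prev))))
        (setf prev x
              (aref out n) y
              (aref line idx) y)))))

With a nonzero ripple coefficient and a tap near half the string length, partials that are in phase at the tap are emphasized, which loosely mimics the flageolet effect described above.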
EURASIP Journal on Advances in Signal Processing | 2004
Vesa Välimäki; Henri Penttinen; Jonte Knif; Mikael Laurson; Cumhur Erkut
A sound synthesis algorithm for the harpsichord has been developed by applying the principles of digital waveguide modeling. A modification to the loss filter of the string model is introduced that allows more flexible control of decay rates of partials than is possible with a one-pole digital filter, which is a usual choice for the loss filter. A version of the commuted waveguide synthesis approach is used, where each tone is generated with a parallel combination of the string model and a second-order resonator that are excited with a common excitation signal. The second-order resonator, previously proposed for this purpose, approximately simulates the beating effect appearing in many harpsichord tones. The characteristic key-release thump terminating harpsichord tones is reproduced by triggering a sample that has been extracted from a recording. A digital filter model for the soundboard has been designed based on recorded bridge impulse responses of the harpsichord. The output of the string models is injected in the soundboard filter that imitates the reverberant nature of the soundbox and, particularly, the ringing of the short parts of the strings behind the bridge.
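The role of the parallel second-order resonator can be sketched as follows: a two-pole filter tuned close to one of the string partials is driven by the same excitation as the string model, and summing the two outputs produces the slow beating heard in many harpsichord tones. The coefficient values below are assumptions for illustration only, not those of the published model.

;; Minimal two-pole resonator: y[n] = x[n] + 2 r cos(w) y[n-1] - r^2 y[n-2].
(defun resonate (excitation freq-hz &key (sample-rate 44100.0d0) (r 0.9995d0))
  (let* ((w   (coerce (/ (* 2 pi freq-hz) sample-rate) 'double-float))
         (a1  (* 2.0d0 r (cos w)))
         (a2  (- (* r r)))
         (out (make-array (length excitation) :element-type 'double-float
                                              :initial-element 0.0d0))
         (y1 0.0d0)
         (y2 0.0d0))
    (dotimes (n (length excitation) out)
      (let ((y (+ (aref excitation n) (* a1 y1) (* a2 y2))))
        (setf (aref out n) y
              y2 y1
              y1 y)))))

;; The tone is then, roughly, the string-model output plus (resonate excitation f),
;; with both branches fed the same excitation signal.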
Computer Music Journal | 2003
Vesa Välimäki; Mikael Laurson; Cumhur Erkut
The clavichord is one of the oldest keyboard instruments, and it is still often used in performances and recordings of Renaissance and Baroque music. The sound of the instrument is pleasant and expressive but quiet. Consequently, the instrument can only be used in intimate performances for small audiences. This is the main reason why the clavichord was replaced by the harpsichord and finally by the modern piano, both of which produce a considerably louder output. Attempts have been made to amplify the sound of the clavichord using a piezoelectric pickup (Burhans 1973). One of our motivations in this research is to give the clavichord a new life in the digital world, where the faint sound level of the instrument can be amplified by simply turning a volume knob. The suggested synthesis model is based on digital waveguide modeling of string instruments (Smith 1992, 1998; Välimäki et al. 1996; Karjalainen, Välimäki, and Tolonen 1998) and uses the principle of commuted waveguide synthesis, where the soundbox's response is incorporated in the excitation signal (Smith 1993; Karjalainen and Välimäki 1993; Karjalainen, Välimäki, and Jánosy 1993). Special sampling techniques are also employed. Musical examples produced using the proposed synthesizer will be included on a forthcoming Computer Music Journal CD.
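The commutation principle mentioned above can be illustrated in a few lines: because the string and the soundbox are treated as linear and time-invariant, their order can be swapped, so the soundbox response is folded into the excitation once, offline, and the result is fed to the string model at note-on time. This is a minimal sketch of the idea, not the published clavichord model.

;; Fold the body response into the excitation by direct-form convolution.
(defun commuted-excitation (pluck body-impulse-response)
  (let* ((n   (length pluck))
         (m   (length body-impulse-response))
         (out (make-array (+ n m -1) :element-type 'double-float
                                     :initial-element 0.0d0)))
    (dotimes (i n out)
      (dotimes (j m)
        (incf (aref out (+ i j))
              (* (aref pluck i) (aref body-impulse-response j)))))))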
Computer Music Journal | 2008
Jukka Rauhala; Mikael Laurson; Vesa Välimäki; Heidi-Maria Lehtonen; Vesa Norilo
Department of Signal Processing and Acoustics, Helsinki University of Technology, P.O. Box 3000, FI-02015 TKK, Espoo, Finland (www.acoustics.hut.fi); Centre for Music and Technology, Sibelius Academy, P.O. Box 86, FI-00251 Helsinki, Finland (cmt.siba.fi).
Journal of New Music Research | 2001
Matti Karjalainen; Tero Tolonen; Vesa Välimäki; Cumhur Erkut; Mikael Laurson; Jarmo Hiipakka
Physical modeling and model-based sound synthesis have recently been among the most active topics of computer music and audio research. In the modeling approach one typically tries to simulate and duplicate the most prominent sound generation properties of the acoustic musical instrument under study. If desired, the models developed may then be modified in order to create sounds that are not common or even possible from physically realizable instruments. In addition to physically related principles it is possible to combine physical models with other synthesis and signal processing methods to realize hybrid modeling techniques. This article gives an overview of some recent results in model-based sound synthesis and related signal processing techniques. The focus is on modeling and synthesizing plucked string sounds, although the techniques may find much more widespread application. First, as a background, an advanced linear model of the acoustic guitar is discussed along with model control principles. Then the methodology to include inherent nonlinearities and time-varying features is introduced. Examples of nonlinearities are studied in the context of two string instruments, the kantele and the tanbur, which exhibit interesting nonlinear effects.
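One of the nonlinearities referred to above, tension modulation, can be sketched very compactly: the instantaneous energy of the string's delay line is mapped to a small shortening of the effective string length, so a loud pluck starts slightly sharp and relaxes in pitch as the tone decays. The scaling constant below is an assumption for illustration, not a value from the kantele or tanbur models.

;; Map delay-line energy to a delay deviation (in samples) that is
;; subtracted from the nominal loop delay of a waveguide string.
(defun tension-delay-deviation (delay-line &key (depth 0.3d0))
  (let ((energy 0.0d0))
    (dotimes (i (length delay-line))
      (incf energy (expt (aref delay-line i) 2)))
    (* depth (/ energy (length delay-line)))))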
computer music modeling and retrieval | 2009
Mika Kuuskankare; Mikael Laurson
In this paper we explore the potential of the Expressive Notation Package (ENP) in representing electroacoustic scores. We use Rainer Wehinger's listening score, originally created for György Ligeti's electronic piece Artikulation, as reference material. Our objective is to recreate a small excerpt from the score using an ENP tool called Expression Designer (ED). ED allows the user to create new graphical expressions with programmable behavior. In this paper we aim to demonstrate how a collection of specific graphic symbols found in Artikulation can be implemented using ED. We also discuss how this information can later be manipulated and accessed, for example, for the needs of a sonic realization.
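To give a flavor of what a graphical expression with programmable behavior involves, the sketch below defines an object that draws a cloud of points inside the region of the score it is attached to. It is purely hypothetical: the class, method, and helper names are invented here and are not Expression Designer's actual API.

;; Purely hypothetical sketch; these names are NOT Expression Designer's API.
(defun draw-point (x y)
  ;; Stand-in for the host graphics call: here it only records the point.
  (format t "point at (~,1f, ~,1f)~%" x y))

(defclass dust-cloud-expression ()
  ((density :initarg :density :accessor density :initform 20)))

(defmethod draw-expression ((e dust-cloud-expression) x y width height)
  ;; Scatter DENSITY points inside the given score rectangle.
  (dotimes (i (density e))
    (draw-point (+ x (random width)) (+ y (random height)))))

;; (draw-expression (make-instance 'dust-cloud-expression :density 5)
;;                  0.0 0.0 100.0 20.0)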