Anders Friberg
Royal Institute of Technology
Publications
Featured research published by Anders Friberg.
Journal of the Acoustical Society of America | 1995
Anders Friberg; Johan Sundberg
In acoustic communication, timing seems to be an exceedingly important aspect. The just noticeable difference (jnd) for small perturbations of an isochronous sequence of sounds is particularly important in music, in which such sequences frequently occur. This article reviews the literature in the area and presents an experiment designed to resolve some conflicting results regarding the dependence on tempo at quick tempi and the relevance of musical experience. The jnd for a perturbation of the timing of a tone appearing in an isochronous sequence was examined by the method of adjustment. Thirty listeners of varied musical background were asked to adjust the position of the fourth tone in a sequence of six, such that they heard the sequence as perfectly isochronous. The tones were presented at a constant interonset time that was varied between 100 and 1000 ms. The absolute jnd was found to be approximately constant at 6 ms for tone interonset intervals shorter than about 240 ms, and the relative jnd constant at 2.5% of the interonset interval above 240 ms. Subjects' musical training did not affect these values. Comparison with previous work showed that a constant absolute jnd below 250 ms and a constant relative jnd above 250 ms tend to appear regardless of the perturbation type, at least if the sequence is relatively short.
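The two-regime threshold reported in this abstract can be written as a single piecewise function. The sketch below is an illustrative summary of the reported values, not the authors' analysis code; the 240 ms breakpoint and the 6 ms / 2.5% constants are taken directly from the abstract.

```python
def jnd_ms(ioi_ms: float) -> float:
    """Just noticeable difference (ms) for a timing perturbation in an
    isochronous sequence, as a function of the inter-onset interval."""
    if ioi_ms <= 240.0:
        return 6.0           # constant absolute threshold below ~240 ms
    return 0.025 * ioi_ms    # constant relative threshold (2.5%) above it
```

Note that the two regimes meet exactly at the breakpoint: 2.5% of 240 ms is 6 ms, so the function is continuous.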
Journal of the Acoustical Society of America | 1999
Anders Friberg; Johan Sundberg
This investigation explores the common assumption that music and motion are closely related by comparing the stopping of running and the termination of a piece of music. Video recordings were made ...
Computer Music Journal | 1991
Anders Friberg
This is a detailed technical presentation of performance rules resulting from a project in which music performance has been analyzed by means of an analysis-by-synthesis procedure (Sundberg et al. 1983). The rules have been developed in cooperation with Johan Sundberg and Lars Frydén. Lars Frydén is the musical expert, Johan Sundberg has contributed mostly to the cognitive aspects and, in an early stage of the project, to programming, and the author has been responsible for the organization and formulation of the final programming. The project has been extensively described and discussed in previous articles (see References). The purpose of the rules is to convert the written score, complemented with chord symbols and phrase markers, to a musically acceptable performance. They are currently implemented in the programming language Lisp on a Macintosh computer (Friberg and Sundberg 1986) and the software is available on request. The rules operate on the parameters listed in Table 1. Two different tone articulation models are used. The first model uses only off-time duration (DRO), as defined in Fig. 1. The second model is more complete and uses a four-point envelope (T1-T4 and L1-L4) to shape each tone individually (see Fig. 16). Whenever possible, the resulting deviations from the rules are additive. This means that each tone may be processed by several rules, and the deviations made from each rule will be added successively to the parameters of that tone. The order in which the rules are applied is in general not critical, except for the synchronization rules and the amplitude envelope rules, which have to be applied last. The mixed intonation rule must also be applied after all other intonation rules. The sound parameters that are manipulated by the rules are listed in Table 1. For technical reasons, the rules will be presented in this article in an order based on the resulting parameter changes. There are five groups of rules: (1) single parameter rules,
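The additive principle described in the abstract (each tone may be processed by several rules, and each rule's deviations are summed into that tone's parameters) can be sketched in a few lines. This is an illustrative Python sketch, not the Lisp implementation discussed above, and both example rules are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Tone:
    duration_ms: float
    amplitude_db: float = 0.0

def rule_final_lengthening(tones):
    # hypothetical rule: lengthen the last tone of the sequence by 10%
    return [(0.1 * t.duration_ms if i == len(tones) - 1 else 0.0, 0.0)
            for i, t in enumerate(tones)]

def rule_soften_short_tones(tones):
    # hypothetical rule: play tones shorter than 200 ms 1 dB softer
    return [(0.0, -1.0 if t.duration_ms < 200.0 else 0.0) for t in tones]

def apply_rules(tones, rules):
    """Apply each rule in turn; deviations are added successively,
    so several rules can act on the same tone."""
    for rule in rules:
        for tone, (d_dur, d_amp) in zip(tones, rule(tones)):
            tone.duration_ms += d_dur
            tone.amplitude_db += d_amp
    return tones
```

As in the rule system described above, ordering is mostly immaterial under pure addition; it matters only when a later rule reads parameters that an earlier rule has already changed.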
Musicae Scientiae | 2001
Patrik N. Juslin; Anders Friberg; Roberto Bresin
This article presents a computational model of expression in music performance: the GERM model. The purpose of the GERM model is to (a) describe the principal sources of variability in music performance, (b) emphasize the need to integrate different aspects of performance in a common model, and (c) provide some preliminaries (germ = a basis from which a thing may develop) for a computational model that simulates the different aspects. Drawing on previous research on performance, we propose that performance expression derives from four main sources of variability: (1) Generative Rules, which function to convey the generative structure in a musical manner (e.g., Clarke, 1988; Sundberg, 1988); (2) Emotional Expression, which is governed by the performer's expressive intention (e.g., Juslin, 1997a); (3) Random Variations, which reflect internal timekeeper variance and motor delay variance (e.g., Gilden, 2001; Wing and Kristofferson, 1973); and (4) Movement Principles, which prescribe that certain features of the performance are shaped in accordance with biological motion (e.g., Shove and Repp, 1995). A preliminary version of the GERM model was implemented by means of computer synthesis. Synthesized performances were evaluated by musically trained participants in a listening test. The results from the test support a decomposition of expression in terms of the GERM model. Implications for future research on music performance are discussed.
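A minimal sketch of the GERM decomposition, under the assumption (not stated in the abstract) that the four sources combine additively per tone: deterministic contributions from the Generative, Emotional, and Movement components, plus a Gaussian term standing in for timekeeper and motor noise. The function name and parameterization are hypothetical.

```python
import random

def germ_deviation_ms(generative, emotional, movement, sigma_random=5.0):
    """Total timing deviation (ms) for one tone as the sum of the four
    GERM sources: G + E + M are deterministic per-tone contributions;
    R is modeled here as zero-mean Gaussian noise."""
    return generative + emotional + movement + random.gauss(0.0, sigma_random)
```

Setting `sigma_random=0.0` recovers the deterministic part, which is a convenient way to inspect the G, E, and M contributions in isolation.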
Computer Music Journal | 2000
Anders Friberg; Vittorio Colombo; Lars Frydén; Johan Sundberg
Director Musices is a program that transforms notated scores into musical performances. It implements the performance rules emerging from research projects at the Royal Institute of Technology (KTH). Rules in the program model performance aspects such as phrasing, articulation, and intonation, and they operate on performance variables such as tone inter-onset duration, amplitude, and pitch. By manipulating rule parameters, the user can act as a metaperformer controlling different features of the performance, leaving the technical execution to the computer. Different interpretations of the same piece can easily be obtained. Features of Director Musices include MIDI file input and output, rule palettes, graphical display of all performance variables (along with music notation), and user-defined performance rules. The program is implemented in Common Lisp and is available free as a stand-alone application both for Macintosh and Windows platforms. Further information, including music examples, publications, and the software itself, is located online at http://www.speech.kth.se/music/performance/.
Journal of New Music Research | 1998
Anders Friberg; Roberto Bresin; Lars Frydén; Johan Sundberg
In this investigation we use the term musical punctuation for the marking of melodic structure by commas inserted at the boundaries that separate small structural units. Two models are presented th ...
Journal of New Music Research | 2000
Anders Friberg; Johan Sundberg; Lars Frydén
The common association of music with motion was investigated in a direct way. Could the original motion quality of different gaits be transferred to music and be perceived by a listener? Measurements of the ground reaction force by the foot during different gaits were transferred to sound by using the vertical force curve as sound level envelopes for tones played at different tempi. Three listening experiments assessed the motion quality of the resulting stimuli. In the first experiment, where the listeners were asked to freely describe the tones, 25% of answers were direct references to motion; such answers were more frequent at faster tempi. In the second experiment, where the listeners were asked to describe the motion quality, about half of the answers directly related to motion could be classified as belonging to one of the categories of dancing, jumping, running, walking, or stumbling. Most gait patterns were clearly classified as belonging to one of these categories, independent of presentation tempo. In the third experiment, the listeners were asked to rate the stimuli on 24 adjective scales. A factor analysis yielded four factors that could be interpreted as Swift vs. Solemn (factor 1), Graceful vs. Stamping (factor 2), Limping vs. Forceful (factor 3), and Springy (factor 4, no contrasting adjective). The results from the three experiments were consistent and indicated that each tone (corresponding to a particular gait) could clearly be categorised in terms of motion.
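The sonification method in this study uses the vertical ground-reaction-force curve of a gait as the amplitude envelope of a tone. The sketch below illustrates that mapping under assumptions of my own (a sine carrier, linear interpolation of the force curve, normalization to its peak); it is not the authors' stimulus-generation code.

```python
import math

def force_envelope_tone(force_curve, freq_hz=440.0, sr=44100, dur_s=0.5):
    """Render a sine tone whose amplitude follows the force curve,
    normalized to its peak (assumed nonzero) and linearly
    interpolated across the tone's duration."""
    peak = max(force_curve)
    n = int(sr * dur_s)
    samples = []
    for i in range(n):
        # position of sample i within the force curve
        pos = i / (n - 1) * (len(force_curve) - 1)
        j = int(pos)
        frac = pos - j
        k = min(j + 1, len(force_curve) - 1)
        env = (force_curve[j] * (1 - frac) + force_curve[k] * frac) / peak
        samples.append(env * math.sin(2 * math.pi * freq_hz * i / sr))
    return samples
```

Playing such tones at different tempi, as in the experiments above, amounts to varying `dur_s` while keeping the same force curve.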
Computer Music Journal | 1991
Anders Friberg; Lars Frydén; Lars-Gunnar Bodin; Johan Sundberg
A computer program for synthesis of music performance, originally developed for traditional tonal music by means of an analysis-by-synthesis strategy, is applied to contemporary piano music as well ...
Journal of Experimental Psychology: Applied | 2006
Patrik N. Juslin; Jessika Karlsson; Erik Lindström; Anders Friberg; Erwin Schoonderwaldt
Communication of emotions is of crucial importance in music performance. Yet research has suggested that this skill is neglected in music education. This article presents and evaluates a computer program that automatically analyzes music performances and provides feedback to musicians in order to enhance their communication of emotions. Thirty-six semi-professional jazz/rock guitar players were randomly assigned to one of three conditions: (1) feedback from the computer program, (2) feedback from music teachers, and (3) repetition without feedback. Performance measures revealed the greatest improvement in communication accuracy for the computer program, but usability measures indicated that certain aspects of the program could be improved. Implications for music education are discussed.
Contemporary Music Review | 1989
Johan Sundberg; Anders Friberg; Lars Frydén
Recently developed parts of a computer program are presented that contain a rule system which automatically converts music scores to musical performance, and which, in a sense, can be regarded as a ...