Publication


Featured research published by Richard Polfreman.


Organised Sound | 1999

A task analysis of music composition and its application to the development of Modalyser

Richard Polfreman

This paper presents an overview of a generic task model of music composition, developed as part of a research project investigating methods of improving user-interface designs for music software (in particular focusing on sound synthesis tools). The task model has been produced by applying recently developed task analysis techniques to the complex and creative task of music composition. The model itself describes the purely practical aspects of music composition, avoiding any attempt to include the aesthetic motivations and concerns of composers. We go on to illustrate the application of the task model to software design by describing various parts of Modalyser, a graphical user-interface program designed by the author for creating musical sounds with IRCAM's Modalys physical modelling synthesis software. The task model is not yet complete at all levels and requires further refinement, but is deemed to be sufficiently comprehensive to merit presentation here. Although developed for assisting in software design, the task model may be of wider interest to those concerned with the teaching of music composition and research into music composition generally. This paper has been developed from a short presentation given at the First Sonic Arts Network Conference in January 1998.


Organised Sound | 2006

Time to re-wire? Problems and strategies for the maintenance of live electronics

Richard Polfreman; David Sheppard; Ian Dearden

While much work is proceeding with regard to the preservation and restoration of audio documents in general and compositions for tape in particular, relatively little research has been published with regard to the issues of preserving compositions for live electronics. Such works often involve a distinct performance element difficult to capture in a single recording, and it is typically only in performance that such works can be experienced as the composer intended. However, performances can become difficult or even impossible to present over time due to data and/or equipment issues. Sustainability here therefore refers to the effective recording of all the information necessary to set up the live electronics for a performance. Equally, it refers to the availability of appropriate devices, as rapid technological change soon makes systems obsolete and manufacturers discontinue production. The authors have had a range of experience re-working performances over a number of years, including compositions by Luigi Nono and Jonathan Harvey, amongst others. In this paper we look at the problem as a whole, focusing on Jonathan Harvey's works with electronic elements, which span some twenty-six years, as exemplars of the types of problems involved.


Organised Sound | 2002

Modalys-ER for OpenMusic (MfOM): virtual instruments and virtual musicians

Richard Polfreman

Modalys-ER is a graphical environment for creating physical model instruments and generating musical sounds with them. While Modalys-ER provides users with a relatively simple-to-use interface, it has only limited methods for mapping control data onto model parameters for performance. While these are sufficient for many interesting applications, they do not bridge the gap from high-level specifications such as MIDI files or Standard Western Notation (SWN) down to low-level parameters within the physical model. With this issue in mind, a part of Modalys-ER has now been ported to OpenMusic, providing a platform for developing more sophisticated automation and control systems that can be specified through OpenMusic's visual programming interface. An overview of the MfOM library is presented and illustrated with several musical examples using some early mapping designs. Also, some of the issues relating to building and controlling virtual instruments are discussed and future directions for research in this area are suggested. The first release is now available via the IRCAM Software Forum.
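The "gap" described in the abstract can be illustrated with a minimal, hypothetical mapping from a high-level MIDI note number down to one low-level physical parameter of an ideal string model. This sketch is not MfOM's actual mapping (those are built as OpenMusic patches); the wave-speed value is an arbitrary example.

```python
# Hypothetical high-level -> low-level mapping: MIDI note number to the
# length of an ideal string. Not MfOM's actual mapping; for illustration only.

def midi_to_hz(note):
    """Equal-tempered pitch: MIDI note 69 = A4 = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def string_length(f0, wave_speed=329.6):
    """Length (m) of an ideal string with fundamental f0, from f0 = c / (2L).
    The wave speed here is an arbitrary example value, not a Modalys default."""
    return wave_speed / (2 * f0)

length_a4 = string_length(midi_to_hz(69))   # string length sounding A4
```

A full mapping layer would translate every note event (pitch, onset, duration, dynamics) into time-varying model parameters; this shows only the pitch-to-geometry step.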


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2010

Towards effective singing voice extraction from stereophonic recordings

Stratis Sofianos; Aladdin M. Ariyaeeinia; Richard Polfreman

Extracting a singing voice from its music accompaniment can significantly facilitate certain applications of Music Information Retrieval including singer identification and singing melody extraction. In this paper, we present a hybrid approach for this purpose, which combines properties of the Azimuth Discrimination and Resynthesis (ADRess) method with Independent Component Analysis (ICA). Our proposed approach is developed specifically for the case of singing voice separation from stereophonic recordings. The paper presents the characteristics of the proposed method and details an objective evaluation of its effectiveness.
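The pan-position principle that ADRess builds on can be sketched in a few lines: a centre-panned voice appears with equal magnitude in both stereo channels, while side-panned accompaniment does not, so frequency bins with a left/right magnitude ratio near 1 can be attributed to the voice. This toy example (synthetic sinusoids, whole-signal FFT rather than a short-time transform) illustrates only that principle, not the published ADRess algorithm or the authors' hybrid method.

```python
import numpy as np

fs, dur = 1000, 1.0
t = np.arange(int(fs * dur)) / fs
voice = np.sin(2 * np.pi * 50 * t)        # centre-panned source
accomp = np.sin(2 * np.pi * 130 * t)      # left-panned source

left = 1.0 * voice + 1.0 * accomp         # centre source equal in both channels
right = 1.0 * voice + 0.2 * accomp        # side source attenuated on the right

L, R = np.fft.rfft(left), np.fft.rfft(right)
mag_l, mag_r = np.abs(L), np.abs(R)

# Keep only bins carrying energy whose left/right magnitude ratio is ~1,
# i.e. bins dominated by the centre-panned (voice) source.
active = np.maximum(mag_l, mag_r) > 1e-6
ratio = np.ones_like(mag_l)
ratio[active] = mag_r[active] / np.maximum(mag_l[active], 1e-12)
centre_mask = (ratio > 0.9) & (ratio < 1.1) & active

recovered = np.fft.irfft(np.where(centre_mask, L, 0), n=len(left))
```

With real recordings the sources overlap in frequency and the panning is not clean, which is why ADRess works on a short-time frequency-azimuth plane and why the authors combine it with ICA.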


Organised Sound | 2001

Interpolator: a two-dimensional graphical interpolation system for the simultaneous control of digital signal processing parameters

Martin Spain; Richard Polfreman

The musical use of realtime digital audio tools implies the need for simultaneous control of a large number of parameters to achieve the desired sonic results. Often it is also necessary to be able to navigate between certain parameter configurations in an easy and intuitive way, rather than to precisely define the evolution of the values for each parameter. Graphical interpolation systems (GIS) provide this level of control by allocating objects within a visual control space to sets of parameters that are to be controlled, and using a moving cursor to change the parameter values according to its current position within the control space. This paper describes Interpolator, a two-dimensional interpolation system for controlling digital signal processing (DSP) parameters in real time.
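The core GIS mechanism described above can be sketched compactly. The paper abstract does not specify Interpolator's interpolation law, so this sketch assumes inverse-square-distance weighting, a common choice for such systems; the parameter names are illustrative.

```python
import numpy as np

def interpolate(cursor, objects):
    """Inverse-distance-weighted blend of parameter sets (an assumed law,
    not necessarily Interpolator's own).

    cursor:  (x, y) position in the 2-D control space.
    objects: list of ((x, y), parameter_vector) pairs, one per object.
    Returns the interpolated parameter vector at the cursor position.
    """
    cursor = np.asarray(cursor, dtype=float)
    positions = np.array([p for p, _ in objects], dtype=float)
    params = np.array([v for _, v in objects], dtype=float)

    dists = np.linalg.norm(positions - cursor, axis=1)
    if np.any(dists < 1e-12):               # cursor on top of an object:
        return params[np.argmin(dists)]     # return its parameters exactly
    weights = 1.0 / dists**2                # inverse-square distance weights
    weights /= weights.sum()
    return weights @ params

# Two illustrative "presets": hypothetical (cutoff Hz, resonance) pairs.
objects = [((0.0, 0.0), [100.0, 0.2]),
           ((1.0, 0.0), [4000.0, 0.8])]

midpoint = interpolate((0.5, 0.0), objects)   # equidistant -> equal blend
```

Moving the cursor continuously between objects then sweeps all DSP parameters at once, which is the navigational control the abstract describes.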


Cochlear Implants International | 2015

Participatory design of a music aural rehabilitation programme

Rachel M. van Besouw; Benjamin Oliver; Sarah Hodkinson; Richard Polfreman; M.L. Grasmeder

Objectives: Many cochlear implant (CI) users wish to enjoy music but are dissatisfied by its quality as perceived through their implant. Although there is evidence to suggest that training can improve CI users’ perception and appraisal of music, availability of interactive music-based aural rehabilitation for adults is limited. In response to this need, an ‘Interactive Music Awareness Programme’ (IMAP) was developed with and for adult CI users.

Methods: An iterative design and evaluation approach was used. The process began with identification of user needs through consultations, followed by use of mock-up applications in workshops. Feedback from these was used to develop the prototype IMAP: a programme of 24 interactive sessions enabling users to create and manipulate music. The prototype IMAP was subsequently evaluated in a home trial with 16 adult CI users over a period of 12 weeks.

Results: Overall ratings for the prototype IMAP were positive and indicated that it met users’ needs. Quantitative and qualitative feedback on the sessions and software in the prototype IMAP were used to identify aspects of the programme that worked well and aspects that required improvement. The IMAP was further developed in response to users’ feedback and is freely available online.

Conclusions: The participatory design approach used in developing the IMAP was fundamental in ensuring its relevance, and regular feedback from end users in each phase of development proved valuable for early identification of issues. Observations and feedback from end users supported a holistic approach to music aural rehabilitation.


International Symposium on Music Information Retrieval (ISMIR) | 2001

Sound spotting: a frame-based approach

Christian Spevak; Richard Polfreman


Journal of the Audio Engineering Society | 2012

H-Semantics: A Hybrid Approach to Singing Voice Separation

Stratis Sofianos; Aladdin M. Ariyaeeinia; Richard Polfreman; Reza Sotudeh


International Conference on Music Information Retrieval (ISMIR) | 2009

Integrating musicology's heterogeneous data sources for better exploration

David Bretherton; Daniel Alexander Smith; m.c. schraefel; Richard Polfreman; Mark Everist; Jeanice Brooks; Joe Lambert


International Computer Music Conference | 2005

Re-wired: Reworking 20th Century Live Electronics for Today

Richard Polfreman; David Sheppard; Ian Dearden

Collaboration


Dive into Richard Polfreman's collaborations.

Top Co-Authors:

Joe Lambert (University of Southampton)
Mark Everist (University of Southampton)
m.c. schraefel (University of Southampton)
Christian Spevak (University of Hertfordshire)
Stratis Sofianos (University of Hertfordshire)
Benjamin Oliver (University of Southampton)