Alexis Kirke
University of Plymouth
Publications
Featured research published by Alexis Kirke.
ACM Computing Surveys | 2009
Alexis Kirke; Eduardo Reck Miranda
We present a survey of research into automated and semi-automated computer systems for expressive performance of music. We examine the motivation for such systems and then review the majority of the systems developed over the last 25 years. To highlight some of the possible future directions for new research, the review uses primary terms of reference based on four elements: testing status, expressive representation, polyphonic ability, and performance creativity.
Neuroscience Letters | 2014
Ian Daly; Asad Malik; Faustina Hwang; Etienne B. Roesch; James Weaver; Alexis Kirke; Duncan Williams; Eduardo Reck Miranda; Slawomir J. Nasuto
This paper presents an EEG study into the neural correlates of music-induced emotions. We presented participants with a large dataset containing musical pieces in different styles, and asked them to report on their induced emotional responses. We found neural correlates of music-induced emotion in a number of frequencies over the pre-frontal cortex. Additionally, we found a set of patterns of functional connectivity, defined by inter-channel coherence measures, to be significantly different between groups of music-induced emotional responses.
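The connectivity analysis above rests on inter-channel coherence. As a rough illustration of that kind of measure (not the study's actual pipeline), the following sketch computes magnitude-squared coherence between two synthetic EEG channels and averages it over an assumed alpha band; the sampling rate, band limits, and signals are placeholders.

```python
import numpy as np
from scipy.signal import coherence

# Illustrative sketch: magnitude-squared coherence between two EEG channels.
# Sampling rate, band limits, and the synthetic signals are assumptions,
# not parameters from the study.
fs = 256                      # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / fs)  # 30 s of data

common = np.sin(2 * np.pi * 10 * t)               # shared 10 Hz component
ch_a = common + 0.5 * rng.standard_normal(t.size)
ch_b = common + 0.5 * rng.standard_normal(t.size)

f, cxy = coherence(ch_a, ch_b, fs=fs, nperseg=fs * 2)

# Average coherence within an assumed alpha band (8-13 Hz).
alpha = (f >= 8) & (f <= 13)
print(f"Mean alpha-band coherence: {cxy[alpha].mean():.3f}")
```

In the study, such per-pair coherence values would then be compared between groups of music-induced emotional responses.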
Archive | 2012
Alexis Kirke; Eduardo Reck Miranda
This book discusses all aspects of computing for expressive music performance, from the history of computer systems for expressive music performance (CSEMPs) to the very latest research, in addition to discussing the fundamental ideas, key issues, and directions for future research. Topics and features: includes review questions at the end of each chapter; presents a survey of systems for real-time interactive control of automatic expressive music performance, including simulated conducting systems; examines two systems in detail, YQX and IMAP, each providing an example of a very different approach; introduces techniques for synthesizing expressive non-piano performances; addresses the challenges found in polyphonic music expression, from a statistical modelling point of view; discusses the automated analysis of musical structure, and the evaluation of CSEMPs; describes the emerging field of embodied expressive musical performance, devoted to building robots that can expressively perform music with traditional instruments.
Psychology of Music | 2015
Duncan Williams; Alexis Kirke; Eduardo Reck Miranda; Etienne B. Roesch; Ian Daly; Slawomir J. Nasuto
There has been a significant amount of work implementing systems for algorithmic composition with the intention of targeting specific emotional responses in the listener, but a full review of this work is not currently available. This gap presents an obstacle to those entering the field. Our aim is thus to give an overview of progress in the area of these affectively driven systems for algorithmic composition. Performative and transformative systems are included and differentiated where appropriate, highlighting the challenges these systems now face if they are to be adapted to, or have already incorporated, some form of affective control. Possible real-time applications for such systems, utilizing affectively driven algorithmic composition and biophysical sensing to monitor and induce affective states in the listener, are suggested.
Brain and Cognition | 2015
Ian Daly; Duncan Williams; James Hallowell; Faustina Hwang; Alexis Kirke; Asad Malik; James Weaver; Eduardo Reck Miranda; Slawomir J. Nasuto
It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of complex time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that over 20% of the variance of the participants' music-induced emotions can be predicted by their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracies than either feature type alone (p < 0.01).
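The core idea, predicting reported emotion from a combination of EEG-derived and acoustic features with regression models, can be illustrated with a minimal sketch. Everything below (feature names, dimensions, data, and the cross-validated linear model) is an assumption for illustration rather than the study's method or results.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

# Placeholder data: rows are trials, columns are features.
rng = np.random.default_rng(0)
n_trials = 120
eeg_features = rng.standard_normal((n_trials, 10))       # e.g. band powers (assumed)
acoustic_features = rng.standard_normal((n_trials, 5))    # e.g. tempo, RMS energy (assumed)
reported_emotion = rng.standard_normal(n_trials)          # e.g. felt valence ratings (assumed)

def fit_and_score(features, target):
    """Cross-validated linear regression scored by Pearson correlation."""
    predicted = cross_val_predict(LinearRegression(), features, target, cv=5)
    r, p = pearsonr(target, predicted)
    return r, p

for name, feats in [
    ("EEG only", eeg_features),
    ("acoustic only", acoustic_features),
    ("combined", np.hstack([eeg_features, acoustic_features])),
]:
    r, p = fit_and_score(feats, reported_emotion)
    print(f"{name}: r = {r:.3f}, p = {p:.3f}")
```

Comparing the single-modality fits against the combined fit mirrors the abstract's observation that the combination outperforms either feature type alone.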
Computer Music Journal | 2010
Eduardo Reck Miranda; Alexis Kirke; Qijun Zhang
This chapter introduces an imitative multi-agent system approach to generating expressive performances of music, based on agents' individual parameterized musical rules. We have developed a system called IMAP (imitative multi-agent performer). Aside from investigating the usefulness of such an application of the imitative multi-agent paradigm, there is also a desire to investigate the inherent feature of diversity, and control of diversity, in this methodology: a desirable feature for a creative application such as musical performance. To aid this control of diversity, parameterized rules are utilized based on previous expressive performance research. These are implemented in the agents using previously developed musical analysis algorithms. When experiments are run, it is found that agents express their preferences through their musical performances and that diversity can be generated and controlled.
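A toy sketch of the imitative multi-agent idea, in which each agent holds a vector of parameterized performance rules and selectively imitates performances it prefers, is given below. It is not the IMAP system; the parameter names, imitation rule, and diversity measure are illustrative assumptions.

```python
import numpy as np

# Toy sketch of an imitative multi-agent loop, not the IMAP system itself.
# Each agent holds a vector of parameterized performance rules
# (e.g. tempo deviation, dynamic emphasis -- names are illustrative).
rng = np.random.default_rng(1)
n_agents, n_params = 8, 3
params = rng.uniform(-1, 1, (n_agents, n_params))    # current rule settings
prefs = rng.uniform(-1, 1, (n_agents, n_params))     # each agent's fixed preference
learning_rate = 0.2

def diversity(p):
    """Mean pairwise distance between agents' rule settings."""
    diffs = p[:, None, :] - p[None, :, :]
    return np.linalg.norm(diffs, axis=-1).mean()

for cycle in range(50):
    listener, performer = rng.choice(n_agents, size=2, replace=False)
    # The listener imitates the performer only if the performance is
    # closer to its own preference than its current settings are.
    if (np.linalg.norm(params[performer] - prefs[listener])
            < np.linalg.norm(params[listener] - prefs[listener])):
        params[listener] += learning_rate * (params[performer] - params[listener])

print(f"Diversity after imitation cycles: {diversity(params):.3f}")
```

Varying how selective the imitation rule is, or the learning rate, gives one crude handle on the amount of diversity that survives, which is the kind of control the abstract refers to.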
Journal of Neural Engineering | 2016
Ian Daly; Duncan Williams; Alexis Kirke; James Weaver; Asad Malik; Faustina Hwang; Eduardo Reck Miranda; Slawomir J. Nasuto
Objective: We aim to develop and evaluate an affective brain-computer music interface (aBCMI) for modulating the affective states of its users. Approach: An aBCMI is constructed to detect a user's current affective state and attempt to modulate it in order to achieve specific objectives (for example, making the user calmer or happier) by playing music which is generated according to a specific affective target by an algorithmic music composition system and a case-based reasoning system. The system is trained and tested in a longitudinal study on a population of eight healthy participants, with each participant returning for multiple sessions. Main results: The final online aBCMI is able to detect its users' current affective states with classification accuracies of up to 65% (3 class, [Formula: see text]) and modulate its users' affective states significantly above chance level [Formula: see text]. Significance: Our system represents one of the first demonstrations of an online aBCMI that is able to accurately detect and respond to users' affective states. Possible applications include use in music therapy and entertainment.
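The overall closed loop described above (estimate the user's affective state, pick an affective target, generate music toward that target) could be sketched schematically as follows. The function names, the valence/arousal coding, and the simple target-selection rule are placeholders, not the published aBCMI.

```python
from dataclasses import dataclass

# Schematic closed loop of an affective brain-computer music interface.
# The classifier, generator, and valence/arousal coding here are
# illustrative placeholders, not the published system.

@dataclass
class AffectiveState:
    valence: float   # -1 (negative) .. +1 (positive)
    arousal: float   # -1 (calm)     .. +1 (excited)

def estimate_state_from_eeg(eeg_window) -> AffectiveState:
    """Placeholder for the affect classifier (assumed interface)."""
    return AffectiveState(valence=0.0, arousal=0.5)

def choose_target(current: AffectiveState, goal: str) -> AffectiveState:
    """Very simple stand-in for the case-based reasoning step."""
    if goal == "calmer":
        return AffectiveState(current.valence, max(-1.0, current.arousal - 0.3))
    if goal == "happier":
        return AffectiveState(min(1.0, current.valence + 0.3), current.arousal)
    return current

def generate_music(target: AffectiveState) -> dict:
    """Placeholder for the algorithmic composition system."""
    return {"tempo_bpm": 60 + 60 * (target.arousal + 1) / 2,
            "mode": "major" if target.valence >= 0 else "minor"}

# One pass around the loop (in the real setting this repeats continuously).
current = estimate_state_from_eeg(eeg_window=None)
target = choose_target(current, goal="calmer")
print(generate_music(target))
```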
International Conference of the IEEE Engineering in Medicine and Biology Society | 2014
Ian Daly; James Hallowell; Faustina Hwang; Alexis Kirke; Asad Malik; Etienne B. Roesch; James Weaver; Duncan Williams; Eduardo Reck Miranda; Slawomir J. Nasuto
The neural mechanisms of music listening and appreciation are not yet completely understood. Based on the apparent relationship between the beats per minute (tempo) of music and the desire to move (for example, foot tapping) induced while listening to that music, it is hypothesised that musical tempo may evoke movement-related activity in the brain. Participants are instructed to listen, without moving, to a large range of musical pieces spanning a range of styles and tempos during an electroencephalogram (EEG) experiment. Event-related desynchronisation (ERD) in the EEG is observed to correlate significantly with the variance of the tempo of the musical stimuli. This suggests that the dynamics of the beat of the music may induce movement-related brain activity in the motor cortex. Furthermore, significant correlations are observed between EEG activity in the alpha band over the motor cortex and the bandpower of the music in the same frequency band over time. This relationship is observed to correlate with the strength of the ERD, suggesting that entrainment of motor cortical activity relates to increased ERD strength.
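The central analysis, correlating per-trial event-related desynchronisation with the variance of each stimulus's tempo, can be sketched as below. The ERD definition used here (percentage band-power change from a baseline) and all numbers are illustrative assumptions rather than the study's data or exact pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch of the analysis idea: per-trial event-related desynchronisation
# (ERD) vs. the variance of each stimulus's tempo. Data and the ERD
# definition below (percentage band-power change from baseline) are
# illustrative assumptions.
rng = np.random.default_rng(2)
n_trials = 40

baseline_power = rng.uniform(8, 12, n_trials)     # alpha power before the music
listening_power = rng.uniform(5, 12, n_trials)    # alpha power during the music
erd = 100 * (listening_power - baseline_power) / baseline_power  # % change

tempo_variance = rng.uniform(0, 25, n_trials)     # variance of each piece's tempo

r, p = pearsonr(tempo_variance, erd)
print(f"ERD vs. tempo variance: r = {r:.3f}, p = {p:.3f}")
```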
Archive | 2013
Alexis Kirke; Eduardo Reck Miranda
Pulsed Melodic Processing (PMP) is a computation protocol that utilizes musically based pulse sets (“melodies”) for processing, capable of representing the arousal and valence of affective states. Affective processing and affective input/output are key tools in artificial intelligence and computing. In designing processing elements (e.g. bits, bytes, floats), engineers have primarily focused on processing efficiency and power. They then go on to investigate ways of making them perceivable by the user/engineer. However, Human-Computer Interaction research, and the increasing pervasiveness of computation in our daily lives, supports a complementary approach in which computational efficiency and power are more balanced with understandability to the user/engineer. PMP allows a user to tap into the processing path to hear a sample of what is going on in that affective computation, as well as providing a simpler way to interface with affective input/output systems. This requires the development of new approaches to processing and interfacing PMP-based modules. In this chapter we introduce PMP and examine the approach using three examples: a military robot team simulation with an affective subsystem, a text affective-content estimation system, and a stock market tool.
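A toy encoding in the spirit of PMP is sketched below, assuming a simple mapping in which pulse rate carries arousal and major/minor pitch content carries valence; the note tables, ranges, and mapping details are illustrative assumptions, not the chapter's specification.

```python
# Toy encoding of an affective state as a "pulsed melody", in the spirit of
# Pulsed Melodic Processing. The mapping (pulse rate <- arousal,
# major/minor pitch content <- valence) and the note tables below are
# illustrative assumptions.

MAJOR_PITCHES = [60, 62, 64, 65, 67, 69, 71]   # C major (MIDI numbers)
MINOR_PITCHES = [60, 62, 63, 65, 67, 68, 70]   # C natural minor

def encode_affect(valence: float, arousal: float, length: int = 8):
    """Return (pulse_rate_hz, pitches) for valence/arousal in [-1, 1]."""
    pulse_rate = 1.0 + 4.0 * (arousal + 1) / 2          # 1..5 pulses per second
    pitches = MAJOR_PITCHES if valence >= 0 else MINOR_PITCHES
    # Cycle through the scale to build a short melody of the requested length.
    melody = [pitches[i % len(pitches)] for i in range(length)]
    return pulse_rate, melody

print(encode_affect(valence=-0.7, arousal=0.9))   # tense: fast, minor
print(encode_affect(valence=0.8, arousal=-0.5))   # content: slow, major
```

Listening to such a stream (fast and minor versus slow and major) is what would let an engineer audibly sample the affective state flowing through a PMP-style module.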
Brain-Computer Interfaces | 2014
Ian Daly; Duncan Williams; Faustina Hwang; Alexis Kirke; Asad Malik; Etienne B. Roesch; James Weaver; Eduardo Reck Miranda; Slawomir J. Nasuto
The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, the ultimate success of the BCI may depend on the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and combined visual and auditory feedback. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain control as stably in the music tempo feedback condition as they could in the visual and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
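The common spatial patterns step mentioned at the end can be illustrated with the standard CSP computation via a generalized eigenvalue problem. The sketch below uses synthetic two-class trials; the channel count, trial counts, and data are assumptions and the resulting filters are not the study's.

```python
import numpy as np
from scipy.linalg import eigh

# Minimal common spatial patterns (CSP) sketch on synthetic two-class EEG
# trials, shown only to illustrate the standard technique; channel count,
# trial counts, and data are assumptions, not the study's.
rng = np.random.default_rng(3)
n_channels, n_samples = 8, 512

def class_covariance(trials):
    """Average normalised spatial covariance over trials (channels x samples)."""
    covs = []
    for x in trials:
        c = x @ x.T
        covs.append(c / np.trace(c))
    return np.mean(covs, axis=0)

trials_a = [rng.standard_normal((n_channels, n_samples)) for _ in range(30)]
trials_b = [rng.standard_normal((n_channels, n_samples)) * 1.5 for _ in range(30)]

cov_a = class_covariance(trials_a)
cov_b = class_covariance(trials_b)

# Generalised eigenvalue problem: columns of W are the spatial filters,
# ordered by how much class-A variance they capture relative to the total.
eigvals, W = eigh(cov_a, cov_a + cov_b)
print("Eigenvalues (variance ratios):", np.round(eigvals, 3))
print("First spatial filter:", np.round(W[:, -1], 3))
```

Inspecting how concentrated or diffuse the weights of such filters are, per feedback condition, is the kind of comparison the abstract draws between the music and visual conditions.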