Duncan Williams
University of York
Publications
Featured research published by Duncan Williams.
Neuroscience Letters | 2014
Ian Daly; Asad Malik; Faustina Hwang; Etienne B. Roesch; James Weaver; Alexis Kirke; Duncan Williams; Eduardo Reck Miranda; Slawomir J. Nasuto
This paper presents an EEG study into the neural correlates of music-induced emotions. We presented participants with a large dataset containing musical pieces in different styles, and asked them to report on their induced emotional responses. We found neural correlates of music-induced emotion in a number of frequencies over the pre-frontal cortex. Additionally, we found a set of patterns of functional connectivity, defined by inter-channel coherence measures, to be significantly different between groups of music-induced emotional responses.
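As an illustration of the kind of inter-channel coherence measure used to define functional connectivity above, the following is a minimal sketch on synthetic data; the sampling rate, band limits, and two-channel setup are assumptions for illustration, not the study's actual pipeline.

```python
# Hedged sketch: magnitude-squared coherence between two EEG channels, averaged
# over a frequency band, as one possible functional-connectivity measure.
# Data, sampling rate, and band edges are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 256                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)  # 10 s of data
rng = np.random.default_rng(0)

# Two synthetic "channels" sharing a weak 10 Hz component plus noise.
shared = np.sin(2 * np.pi * 10 * t)
ch_a = shared + rng.normal(scale=1.0, size=t.size)
ch_b = 0.8 * shared + rng.normal(scale=1.0, size=t.size)

f, cxy = coherence(ch_a, ch_b, fs=fs, nperseg=fs * 2)

# Average coherence in the alpha band (8-13 Hz) as a single connectivity value.
alpha = (f >= 8) & (f <= 13)
print(f"mean alpha-band coherence: {cxy[alpha].mean():.3f}")
```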
Psychology of Music | 2015
Duncan Williams; Alexis Kirke; Eduardo Reck Miranda; Etienne B. Roesch; Ian Daly; Slawomir J. Nasuto
There has been a significant amount of work implementing systems for algorithmic composition with the intention of targeting specific emotional responses in the listener, but a full review of this work is not currently available. This gap presents an obstacle to those entering the field. Our aim is thus to give an overview of progress in the area of these affectively driven systems for algorithmic composition. Performative and transformative systems are included and differentiated where appropriate, highlighting the challenges these systems face if they are to be adapted to, or already incorporate, some form of affective control. Possible real-time applications for such systems, utilizing affectively driven algorithmic composition and biophysical sensing to monitor and induce affective states in the listener, are suggested.
Brain and Cognition | 2015
Ian Daly; Duncan Williams; James Hallowell; Faustina Hwang; Alexis Kirke; Asad Malik; James Weaver; Eduardo Reck Miranda; Slawomir J. Nasuto
It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of complex time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that over 20% of the variance of the participants' music-induced emotions can be predicted by their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracies than either feature type alone (p < 0.01).
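A minimal sketch of the general approach described above, predicting a continuous emotion rating from concatenated EEG-derived and acoustic features and reporting the correlation between actual and predicted responses; all data, feature counts, and the choice of a plain linear regression are assumptions, not the paper's actual model.

```python
# Hedged sketch: regression on combined EEG and acoustic features, with
# cross-validated predictions correlated against the reported ratings.
# All arrays are synthetic stand-ins.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_trials = 120
eeg_features = rng.normal(size=(n_trials, 8))       # e.g. band-power values
acoustic_features = rng.normal(size=(n_trials, 5))  # e.g. tempo, spectral descriptors
ratings = (0.4 * eeg_features[:, 0]
           + 0.3 * acoustic_features[:, 0]
           + rng.normal(scale=1.0, size=n_trials))  # reported valence/arousal rating

X = np.hstack([eeg_features, acoustic_features])
predicted = cross_val_predict(LinearRegression(), X, ratings, cv=5)

r, p = pearsonr(ratings, predicted)
print(f"r = {r:.3f}, p = {p:.4f}, r^2 = {r**2:.3f}")
```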
Journal of Neural Engineering | 2016
Ian Daly; Duncan Williams; Alexis Kirke; James Weaver; Asad Malik; Faustina Hwang; Eduardo Reck Miranda; Slawomir J. Nasuto
OBJECTIVE: We aim to develop and evaluate an affective brain-computer music interface (aBCMI) for modulating the affective states of its users. APPROACH: An aBCMI is constructed to detect a user's current affective state and attempt to modulate it in order to achieve specific objectives (for example, making the user calmer or happier) by playing music which is generated according to a specific affective target by an algorithmic music composition system and a case-based reasoning system. The system is trained and tested in a longitudinal study on a population of eight healthy participants, with each participant returning for multiple sessions. MAIN RESULTS: The final online aBCMI is able to detect its users' current affective states with classification accuracies of up to 65% (3 class, [Formula: see text]) and modulate its users' affective states significantly above chance level [Formula: see text]. SIGNIFICANCE: Our system represents one of the first demonstrations of an online aBCMI that is able to accurately detect and respond to users' affective states. Possible applications include use in music therapy and entertainment.
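A hedged sketch of the closed-loop idea behind such an aBCMI: classify the current affective state from EEG features, compare it with a target, and hand an affective goal to a music generator. Every component here (the classifier, the three-class labels, and the target-selection rule standing in for case-based reasoning) is a stub for illustration, not the paper's system.

```python
# Hedged sketch of an affective BCMI control loop with stubbed components.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Assumed training data: EEG feature vectors labelled with 3 affective classes.
X_train = rng.normal(size=(90, 10))
y_train = rng.integers(0, 3, size=90)          # 0 = low, 1 = neutral, 2 = high arousal
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

def choose_affective_target(current_state: int, goal_state: int) -> int:
    """Move one step from the detected state toward the goal (a stand-in for
    the case-based reasoning component)."""
    return int(current_state + np.sign(goal_state - current_state))

new_features = rng.normal(size=(1, 10))        # one new EEG epoch
detected = int(clf.predict(new_features)[0])
target = choose_affective_target(detected, goal_state=0)  # e.g. "make the user calmer"
print(f"detected state {detected}, next musical target {target}")
```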
International Conference of the IEEE Engineering in Medicine and Biology Society | 2014
Ian Daly; James Hallowell; Faustina Hwang; Alexis Kirke; Asad Malik; Etienne B. Roesch; James Weaver; Duncan Williams; Eduardo Reck Miranda; Slawomir J. Nasuto
The neural mechanisms of music listening and appreciation are not yet completely understood. Based on the apparent relationship between the beats per minute (tempo) of music and the desire to move (for example, foot tapping) induced while listening to that music, it is hypothesised that musical tempo may evoke movement-related activity in the brain. Participants are instructed to listen, without moving, to a large range of musical pieces spanning a range of styles and tempos during an electroencephalogram (EEG) experiment. Event-related desynchronisation (ERD) in the EEG is observed to correlate significantly with the variance of the tempo of the musical stimuli. This suggests that the dynamics of the beat of the music may induce movement-related brain activity in the motor cortex. Furthermore, significant correlations are observed between EEG activity in the alpha band over the motor cortex and the bandpower of the music in the same frequency band over time. This relationship is observed to correlate with the strength of the ERD, suggesting that entrainment of motor cortical activity relates to increased ERD strength.
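To make the ERD measure concrete, here is a minimal sketch that computes event-related desynchronisation as the percentage drop in alpha-band power relative to a baseline and correlates it with tempo variance across pieces; the numbers are synthetic and the analysis is simplified, not the study's pipeline.

```python
# Hedged sketch: ERD (%) vs. tempo variance on synthetic per-piece values.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_pieces = 30
tempo_variance = rng.uniform(0, 25, size=n_pieces)       # variance of BPM per piece

baseline_power = rng.uniform(8, 12, size=n_pieces)        # alpha power before music
listening_power = (baseline_power * (1 - 0.01 * tempo_variance)
                   + rng.normal(scale=0.3, size=n_pieces))  # alpha power while listening

# ERD (%) = (baseline - task) / baseline * 100; larger ERD = stronger desynchronisation.
erd = (baseline_power - listening_power) / baseline_power * 100

r, p = pearsonr(tempo_variance, erd)
print(f"tempo variance vs. ERD: r = {r:.3f}, p = {p:.4f}")
```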
Brain-Computer Interfaces | 2015
Joel Eaton; Duncan Williams; Eduardo Reck Miranda
Music as a mechanism for neuro-feedback presents an interesting medium for artistic exploration, especially with regard to passive BCI control. Passive control in a brain-computer music interface (BCMI) provides a means for approximating mental states that can be mapped to select musical phrases, creating a system for real-time musical neuro-feedback. This article presents a BCMI for measuring the affective states of two users, a performer and an audience member, during a live musical performance of the piece titled The Space Between Us. The system adapts to the affective states of the users and selects sequences of a pre-composed musical score. By affect-matching music to mood and subsequently plotting affective musical trajectories across a two-dimensional model of affect, the system attempts to measure the affective interactions of the users, derived from arousal and valence recorded in EEG. An Affective Jukebox, the work of a previous study, validates the method used to read emotions across two dimens...
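A hedged sketch of the affect-matching idea described above: map an (arousal, valence) estimate for each of two users onto a sector of a two-dimensional affect model and select a pre-composed section accordingly. The section names, the averaging of the two users' states, and the quadrant rule are illustrative assumptions, not the score or mapping used in The Space Between Us.

```python
# Hedged sketch: selecting a pre-composed section from two users' affective states.
from typing import Tuple

SECTIONS = {
    ("high", "positive"): "excited_theme",
    ("high", "negative"): "tense_theme",
    ("low", "positive"): "calm_theme",
    ("low", "negative"): "sombre_theme",
}

def quadrant(arousal: float, valence: float) -> Tuple[str, str]:
    """Classify a point on the affect plane (both axes assumed in [-1, 1])."""
    return ("high" if arousal >= 0 else "low",
            "positive" if valence >= 0 else "negative")

def select_section(performer: Tuple[float, float], audience: Tuple[float, float]) -> str:
    """Blend the two users' (arousal, valence) states by averaging, then look up a section."""
    arousal = (performer[0] + audience[0]) / 2
    valence = (performer[1] + audience[1]) / 2
    return SECTIONS[quadrant(arousal, valence)]

print(select_section(performer=(0.6, 0.2), audience=(0.1, -0.4)))
```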
Brain-Computer Interfaces | 2014
Ian Daly; Duncan Williams; Faustina Hwang; Alexis Kirke; Asad Malik; Etienne B. Roesch; James Weaver; Eduardo Reck Miranda; Slawomir J. Nasuto
The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, ultimate success of the BCI may be dependent upon the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain as stable control with the music tempo feedback condition as they could in the visual feedback and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
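To illustrate the auditory feedback condition, the following sketch maps a continuous BCI control signal onto a music tempo; the control-signal range, tempo limits, and smoothing are assumptions for illustration, not the study's music-generation method.

```python
# Hedged sketch: turning a BCI control signal into music-tempo feedback.
import numpy as np

MIN_BPM, MAX_BPM = 60, 140   # assumed tempo range of the generated music

def control_to_tempo(control: float) -> float:
    """Map a BCI control signal in [-1, 1] (e.g. a classifier output for
    left- vs. right-hand motor imagery) linearly onto a tempo in BPM."""
    control = float(np.clip(control, -1.0, 1.0))
    return MIN_BPM + (control + 1.0) / 2.0 * (MAX_BPM - MIN_BPM)

# Example: a stream of control values lightly smoothed before being sonified as tempo.
controls = np.array([-0.8, -0.2, 0.1, 0.5, 0.9])
smoothed = np.convolve(controls, np.ones(3) / 3, mode="same")
for c in smoothed:
    print(f"control {c:+.2f} -> tempo {control_to_tempo(c):5.1f} BPM")
```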
ACM Transactions on Applied Perception | 2017
Duncan Williams; Alexis Kirke; Eduardo Reck Miranda; Ian Daly; Faustina Hwang; James Weaver; Slawomir J. Nasuto
Affectively driven algorithmic composition (AAC) is a rapidly growing field that exploits computer-aided composition in order to generate new music with particular emotional qualities or affective intentions. An AAC system was devised in order to generate a stimulus set covering nine discrete sectors of a two-dimensional emotion space by means of a 16-channel feed-forward artificial neural network. This system was used to generate a stimulus set of short pieces of music, which were rendered using a sampled piano timbre and evaluated by a group of experienced listeners who ascribed a two-dimensional valence-arousal coordinate to each stimulus. The underlying musical feature set, initially drawn from the literature, was subsequently adjusted by amplifying or attenuating the quantity of each feature in order to maximize the spread of stimuli in the valence-arousal space before a second listener evaluation was conducted. This process was repeated a third time in order to maximize the spread of valence-arousal coordinates ascribed to the generated stimulus set in comparison to a spread taken from an existing prerated database of stimuli, demonstrating that this prototype AAC system is capable of creating short sequences of music with a slight improvement on the range of emotion found in a stimulus set comprised of real-world, traditionally composed musical excerpts.
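As an illustration of the mapping at the heart of such a system, here is a minimal sketch of a small feed-forward network trained to turn a target (valence, arousal) coordinate into a vector of musical feature settings. The architecture, feature names, and training data are assumptions for illustration only, not the 16-channel network or feature set used in the paper.

```python
# Hedged sketch: feed-forward mapping from a valence-arousal target to musical features.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
FEATURES = ["tempo", "pitch_range", "rhythmic_density", "mode_majorness"]

# Synthetic training pairs: (valence, arousal) targets -> musical feature vectors in [0, 1].
targets = rng.uniform(-1, 1, size=(200, 2))
musical_features = np.column_stack([
    0.5 * targets[:, 1] + 0.5,   # tempo rises with arousal
    0.3 * targets[:, 1] + 0.5,   # pitch range rises with arousal
    0.4 * targets[:, 1] + 0.5,   # rhythmic density rises with arousal
    0.5 * targets[:, 0] + 0.5,   # "majorness" rises with valence
]) + rng.normal(scale=0.05, size=(200, 4))

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(targets, musical_features)

# Request a stimulus aimed at the high-valence, high-arousal sector.
request = np.array([[0.8, 0.8]])
for name, value in zip(FEATURES, net.predict(request)[0]):
    print(f"{name}: {value:.2f}")
```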
ACM Transactions on Applied Perception | 2015
Duncan Williams; Alexis Kirke; Eduardo Reck Miranda; Ian Daly; James Hallowell; James Weaver; Asad Malik; Etienne B. Roesch; Faustina Hwang; Slawomir J. Nasuto
Affective algorithmic composition is a growing field that combines perceptually motivated affective computing strategies with novel music generation. This article presents work toward the development of one application. The long-term goal is to develop a responsive and adaptive system for inducing affect that is both controlled and validated by biophysical measures. Literature documenting perceptual responses to music identifies a variety of musical features and possible affective correlations, but perceptual evaluations of these musical features for the purposes of inclusion in a music generation system are not readily available. A discrete feature, rhythmic density (a function of note duration in each musical bar, regardless of tempo), was selected because it was shown to be well-correlated with affective responses in existing literature. A prototype system was then designed to produce controlled degrees of variation in rhythmic density via a transformative algorithm. A two-stage perceptual evaluation of a stimulus set created by this prototype was then undertaken. First, listener responses from a pairwise scaling experiment were analyzed via Multidimensional Scaling Analysis (MDS). The statistical best-fit solution was rotated such that stimuli with the largest range of variation were placed across the horizontal plane in two dimensions. In this orientation, stimuli with deliberate variation in rhythmic density appeared farther from the source material used to generate them than from stimuli generated by random permutation. Second, the same stimulus set was then evaluated according to the order suggested in the rotated two-dimensional solution in a verbal elicitation experiment. A Verbal Protocol Analysis (VPA) found that listener perception of the stimulus set varied in at least two commonly understood emotional descriptors, which might be considered affective correlates of rhythmic density. Thus, these results further corroborate previous studies wherein musical parameters are monitored for changes in emotional expression and that some similarly parameterized control of perceived emotional content in an affective algorithmic composition system can be achieved and provide a methodology for evaluating and including further possible musical features in such a system. Some suggestions regarding the test procedure and analysis techniques are also documented here.
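A minimal sketch of the multidimensional scaling step described above, embedding pairwise listener dissimilarity ratings for a small stimulus set into two dimensions; the dissimilarity matrix and stimulus labels are synthetic placeholders, not the study's data.

```python
# Hedged sketch: 2-D MDS embedding of a synthetic pairwise-dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
stimuli = ["source", "dense_var_1", "dense_var_2", "random_perm_1", "random_perm_2"]

# Symmetric dissimilarity matrix (zero diagonal), as might be gathered from
# pairwise-comparison listening tests.
d = rng.uniform(0.2, 1.0, size=(5, 5))
dissimilarity = (d + d.T) / 2
np.fill_diagonal(dissimilarity, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for label, (x, y) in zip(stimuli, coords):
    print(f"{label:>14}: ({x:+.2f}, {y:+.2f})")
```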
Computer Science and Electronic Engineering Conference | 2015
Ian Daly; Asad Malik; James Weaver; Faustina Hwang; Slawomir J. Nasuto; Duncan Williams; Alexis Kirke; Eduardo Reck Miranda
An affectively driven music generation system is described and evaluated. The system is developed for intended eventual use in human-computer interaction systems such as brain-computer music interfaces. It is evaluated for its ability to induce changes in a listener's affective state. The affectively driven algorithmic composition system was used to generate a stimulus set covering 9 discrete sectors of a 2-dimensional affective space by means of a 16-channel feed-forward artificial neural network. This system was used to generate 90 short pieces of music with specific affective intentions, 10 stimuli for each of the 9 sectors in the affective space. These pieces were played to 20 healthy participants, and it was observed that the music generation system induced the intended affective states in the participants. This is further verified by inspecting the galvanic skin response recorded from the participants.
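As a simple illustration of the physiological check mentioned above, the following sketch compares mean galvanic skin responses between stimuli targeting high- and low-arousal sectors; the GSR values are synthetic and the plain t-test is an assumed analysis, not the paper's.

```python
# Hedged sketch: comparing synthetic GSR responses for high- vs. low-arousal stimuli.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Mean GSR change per trial (microsiemens), grouped by intended arousal of the stimulus.
gsr_high_arousal = rng.normal(loc=0.35, scale=0.15, size=45)
gsr_low_arousal = rng.normal(loc=0.20, scale=0.15, size=45)

t, p = ttest_ind(gsr_high_arousal, gsr_low_arousal)
print(f"high vs. low arousal GSR: t = {t:.2f}, p = {p:.4f}")
```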