Publication


Featured research published by Stuart Cunningham.


Audio Mostly Conference | 2011

Identification of perceptual qualities in textural sounds using the repertory grid method

Thomas Grill; Arthur Flexer; Stuart Cunningham

This paper is about exploring which perceptual qualities are relevant to people listening to textural sounds. Knowledge about those personal constructs shall eventually lead to more intuitive interfaces for browsing large sound libraries. By conducting mixed qualitative-quantitative interviews within the repertory grid framework, ten bi-polar qualities are identified. A subsequent web-based study yields measures for inter-rater agreement and mutual similarity of the perceptual qualities based on a selection of 100 textural sounds. Additionally, some initial experiments are conducted to test standard audio descriptors for their correlation with the perceptual qualities.
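
The abstract does not state which agreement statistic was used, so the following is a minimal sketch, assuming each rater scores every sound on a bipolar 1-7 scale: per-quality agreement is estimated as the mean pairwise Spearman correlation between raters.

```python
# Minimal sketch (not from the paper): estimate per-quality inter-rater
# agreement as the mean pairwise Spearman correlation between raters.
# Assumes `ratings` is an (n_raters x n_sounds) array of 1-7 scores for
# one bipolar quality, e.g. "smooth -- coarse".
import itertools
import numpy as np
from scipy.stats import spearmanr

def inter_rater_agreement(ratings: np.ndarray) -> float:
    """Mean pairwise Spearman correlation across raters for one quality."""
    pairs = itertools.combinations(range(ratings.shape[0]), 2)
    rhos = [spearmanr(ratings[a], ratings[b]).correlation for a, b in pairs]
    return float(np.mean(rhos))

# Toy data: 3 raters scoring 5 textural sounds (values are illustrative only).
ratings = np.array([[1, 4, 6, 2, 7],
                    [2, 4, 5, 1, 6],
                    [1, 5, 6, 3, 7]])
print(f"agreement: {inter_rater_agreement(ratings):.2f}")
```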


Audio Mostly Conference | 2010

Applying personal construct psychology in sound design using a repertory grid

Stuart Cunningham

This paper highlights the repertory grid technique as a mechanism for use in multimedia assessment and classification, with a particular focus on its use with audio media and in sound design tasks. The paper describes the repertory grid method and provides an original, small-scale investigation as an example of the technique in action. The results show that repertory grid can indeed be employed with multimedia elements, something that is currently not common practice in the field, and that there is value in being able to classify media elements by extracting group norms and semantic descriptions.


Audio Mostly Conference | 2013

A discussion of musical features for automatic music playlist generation using affective technologies

Darryl Griffiths; Stuart Cunningham; Jonathan Weinel

This paper discusses how human emotion could be quantified using contextual and physiological information gathered from a range of sensors, and how this data could then be used to automatically generate music playlists. The work is very much in progress, and this paper details what has been done so far, along with plans for experiments and feature mapping to validate the concept in real-world scenarios. We begin by discussing existing affective systems that automatically generate playlists based on human emotion. We then consider current work in audio description analysis. A system is proposed that measures human emotion based on contextual and physiological data collected from a range of sensors. The sensors discussed for capturing such characteristics range from temperature and light to EDA (electrodermal activity) and ECG (electrocardiogram). The concluding section describes the progress achieved so far, which includes defining datasets using a conceptual design, microprocessor electronics, and data acquisition using Matlab. Lastly, there is a brief discussion of future plans to develop this research.
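
Since the paper describes the pipeline conceptually rather than specifying a mapping, the sketch below is illustrative only: it assumes normalised sensor readings, a hypothetical linear mapping to a valence/arousal estimate, and a track library tagged with affect values, from which the nearest-matching tracks form the playlist.

```python
# Illustrative sketch only: the paper proposes this kind of pipeline but does
# not specify the mapping. Sensor values are assumed normalised to 0..1, the
# linear weights are hypothetical, and track affect tags are hand-assigned.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    valence: float  # -1 (negative) .. 1 (positive)
    arousal: float  # -1 (calm) .. 1 (excited)

def estimate_affect(eda: float, heart_rate: float, light: float) -> tuple[float, float]:
    """Hypothetical linear mapping from normalised sensor values to (valence, arousal)."""
    arousal = 2 * (0.6 * eda + 0.4 * heart_rate) - 1   # physiological activation
    valence = 2 * (0.7 * light + 0.3 * (1 - eda)) - 1  # crude contextual proxy
    return valence, arousal

def generate_playlist(library: list[Track], valence: float, arousal: float, n: int = 3) -> list[Track]:
    """Select the n tracks whose affect tags lie closest to the estimated state."""
    return sorted(library, key=lambda t: (t.valence - valence) ** 2 + (t.arousal - arousal) ** 2)[:n]

library = [Track("Calm Piano", 0.3, -0.8), Track("Upbeat Pop", 0.8, 0.7),
           Track("Dark Ambient", -0.6, -0.4), Track("Fast Rock", 0.2, 0.9)]
v, a = estimate_affect(eda=0.7, heart_rate=0.6, light=0.4)
print([t.title for t in generate_playlist(library, v, a)])
```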


Cyberworlds | 2015

Multi-disciplinary Creativity and Collaboration: Utilizing Crowd-Accelerated Innovation and the Internet

Stuart Cunningham; Dan Berry; Rae A. Earnshaw; Peter S. Excell; Estelle Thompson

The growth of the creative industries has been a national trend in the UK over the last 5 to 10 years, despite global trends of economic downturn, and has been mirrored on the international stage. A distinguishing feature of the creative sector is its make-up of practitioners from a broad spectrum of disciplines. In addition, creative processes often benefit from the collaboration of partners. Established models and challenges of creativity are assessed and contextualized against the contemporaneous creative industries, which feature multi-disciplinary teams, supported by current technology. Crowd-accelerated developments and creative collaboration via social media are having a transformative effect on the creation, distribution, and exhibition of creative works. They are also having an impact on traditional art and design processes. Multi-user interaction enables location-based art works to be transformed into new kinds of interactive and dynamic experiences for global viewers. Current opportunities, issues and challenges that arise are discussed, and a number of questions are identified for further discussion.


International Conference on Internet Technology and Applications | 2017

Application of formal grammar in text mining and construction of an ontology

Anton Kanev; Stuart Cunningham; Terekhov Valery

This work describes an investigation of formal grammar with application to text mining. This is an important area since text is the most widespread type of data and contains a great deal of potentially useful information. The unstructured nature of text requires different processing methods, in contrast to other types of data mining. In this work, the authors propose an original approach to text mining by building a parse tree for each sentence using a regular grammar and creating an ontology, and they provide a demonstration of this system implemented in a constrained scenario. This ontology can be used for different tasks, ranging from expert systems to automatic machine translation. The ontology is a network consisting of concepts linked by relations. The authors developed a new system that implements the proposed approach and works across different languages.
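
The grammar itself is not reproduced in the abstract, so the following is a small sketch under assumed simplifications: a toy regular pattern matches subject-verb-object sentences, and the resulting triples are accumulated into an ontology represented as a dictionary of concepts and relations.

```python
# Small sketch, not the authors' grammar: extract subject-relation-object
# triples with a toy regular pattern and accumulate them into an ontology
# represented as a dictionary mapping concepts to (relation, concept) pairs.
import re
from collections import defaultdict

# Toy "regular grammar" covering sentences of the form "<Subject> <verb> <object>."
PATTERN = re.compile(r"^(?P<subj>[A-Z][a-z]+)\s+(?P<rel>[a-z]+)\s+(?P<obj>[a-z]+)\.?$")

def build_ontology(sentences: list[str]) -> dict[str, set[tuple[str, str]]]:
    ontology: dict[str, set[tuple[str, str]]] = defaultdict(set)
    for sentence in sentences:
        match = PATTERN.match(sentence.strip())
        if match:  # sentences outside the toy grammar contribute nothing
            ontology[match["subj"]].add((match["rel"], match["obj"]))
    return ontology

corpus = ["Dogs chase cats.", "Cats drink milk.", "A sentence the toy grammar cannot parse."]
for concept, relations in build_ontology(corpus).items():
    print(concept, sorted(relations))
```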


Audio Mostly Conference | 2016

The Sound of the Smell (and taste) of my Shoes too: Mapping the Senses using Emotion as a Medium

Stuart Cunningham; Jonathan Weinel

This work discusses the basic human senses: sight, sound, touch, taste, and smell, and the way in which it may be possible to compensate for the lack of one, or more, of these by explicitly representing stimuli using the remaining senses. There may be many situations or scenarios where not all five of these base senses are being stimulated, either because of an optional restriction or deficit, or because of a physical or sensory impairment such as loss of sight or touch sensation. Related to this, there are other scenarios where sensory matching problems may occur. For example, a user immersed in a virtual environment may have a sense of smell from the real world that is unconnected to the virtual world. In particular, this paper is concerned with how sound can be used to compensate for the lack of other sensory stimulation, and vice versa. As a link is already well established between the visual, touch, and auditory systems, more attention is given to taste and smell, and their relationship with sound. This work presents theoretical concepts, largely oriented around mapping other sensory qualities to sound, based upon existing work in the literature and emerging technologies, to discuss where particular gaps currently exist, how emotion could be a medium for cross-modal representations, and how these might be addressed in future research. It is postulated that descriptive qualities, such as timbre or emotion, are currently the most viable routes for further study, and that this work may later be integrated with the wider body of research into sensory augmentation.
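
The paper argues for emotion as an intermediate representation rather than giving an algorithm; the sketch below is therefore purely illustrative, with hypothetical smell descriptors mapped to a valence/arousal point that is then mapped onward to coarse sound parameters.

```python
# Purely illustrative: emotion (valence, arousal) used as the intermediate
# representation when mapping a smell descriptor onto sound parameters.
# The descriptor-to-affect values and parameter formulas are hypothetical.
SMELL_TO_AFFECT = {
    "fresh cut grass": (0.7, 0.3),
    "burnt rubber": (-0.8, 0.6),
    "vanilla": (0.6, -0.4),
}

def affect_to_sound(valence: float, arousal: float) -> dict[str, float]:
    """Map an affect point onto coarse, synthesiser-style sound parameters."""
    return {
        "tempo_bpm": 60 + 80 * (arousal + 1) / 2,   # calm -> slow, excited -> fast
        "brightness": (valence + 1) / 2,            # negative -> dark, positive -> bright
        "roughness": max(0.0, -valence) * (arousal + 1) / 2,
    }

for smell, (v, a) in SMELL_TO_AFFECT.items():
    print(smell, affect_to_sound(v, a))
```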


Conference on Visual Media Production | 2015

Immersive RGB+D zoetropic projection for touchscreens and HMDs/wearables

Steven Davies; Stuart Cunningham; Mike Wright

This experiment was designed to capture and create media suitable for testing and evaluating user interfaces for interaction with immersive content. A cross-platform, interactive 360° zoetropic multimedia interface was developed using RGB+D media captured with DSLR and Microsoft Kinect hardware. It was found that immersion and control intuition differed significantly amongst participants (n=8) across hardware, but that perception of realism did not vary.


Multimedia Tools and Applications | 2014

Data reduction of audio by exploiting musical repetition

Stuart Cunningham; Vic Grout

This paper presents and evaluates a method of audio compression specifically designed to exploit the natural repetition that occurs within musical audio. Our system is entitled Audio Compression Exploiting Repetition (ACER). ACER is a perceptual technique, but one that does not exploit masking; rather, it applies the principles of Lempel-Ziv and run-length encoding, treating repeated audio sequences in the way those methods treat numeric or character strings. The ACER procedure applies a pseudo-exhaustive search process and spectral difference grading. Since ACER exploits musical structure, the amount of data reduction achieved varies from piece to piece. The system is described before results on a corpus of material are presented. The analysis shows that moderate amounts of data reduction take place whilst the system operates within parameters designed to maintain high levels of perceptual audio quality, whilst lower levels of perceptual quality yield greater data reduction. Objective quality evaluations are conducted that reveal degradation in fidelity relative to the compression parameters.
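
The abstract describes ACER only at a high level, so the sketch below is a simplified stand-in rather than the ACER algorithm: it splits a signal into fixed-length frames, grades frames by the Euclidean distance between their FFT magnitude spectra as a crude spectral difference, and replaces frames that fall within a similarity threshold of an earlier frame with a dictionary-style back-reference.

```python
# Simplified stand-in for repetition-exploiting audio coding (not ACER itself):
# frames whose magnitude spectra fall within `threshold` of an earlier frame
# are stored as back-references instead of raw samples.
import numpy as np

def encode(signal: np.ndarray, frame_len: int = 1024, threshold: float = 5.0):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    spectra = [np.abs(np.fft.rfft(f)) for f in frames]
    encoded = []       # each entry: ("raw", samples) or ("ref", earlier frame index)
    kept_spectra, kept_indices = [], []
    for idx, (frame, spec) in enumerate(zip(frames, spectra)):
        dists = [np.linalg.norm(spec - s) for s in kept_spectra]  # crude spectral difference
        if dists and min(dists) < threshold:
            encoded.append(("ref", kept_indices[int(np.argmin(dists))]))
        else:
            encoded.append(("raw", frame))
            kept_spectra.append(spec)
            kept_indices.append(idx)
    return encoded

def decode(encoded) -> np.ndarray:
    out = []
    for kind, payload in encoded:
        out.append(out[payload] if kind == "ref" else payload)  # substitute earlier frame
    return np.concatenate(out)
```

Data reduction then comes from a reference being far smaller to store than a frame of samples; as in the paper, the amount achieved depends on how much repetition the piece contains and on the chosen threshold.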


Audio Mostly Conference | 2013

Initial objective & subjective evaluation of a similarity-based audio compression technique

Stuart Cunningham; Jonathan Weinel; Shaun Roberts; Vic Grout; Darryl Griffiths

In this paper, we undertake an initial evaluation of a recently developed audio compression approach: Audio Compression Exploiting Repetition (ACER). This is a novel compression method that employs dictionary-based techniques to encode repetitive musical sequences that naturally occur within musical audio. As such, it is a lossy compression technique that exploits human perception to achieve data reduction. To evaluate the output from the ACER approach, we conduct a pilot evaluation of ACER-coded audio, employing both objective and subjective testing, to validate the approach. Results show that the ACER approach is capable of producing compressed audio whose subjective and objective quality grades vary in line with the amount of compression desired, configured by setting a similarity threshold value. Several lessons are learned and suggestions are given as to how a larger, enhanced series of listening tests will be taken forward in future, as a direct result of the work presented in this paper.
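
The specific objective tests are not detailed in the abstract; as a generic illustration of an objective measure that could be plotted against the similarity threshold, the sketch below computes segmental SNR between the original and the decoded signal (an assumption, not necessarily the metric the authors used).

```python
# Generic objective quality proxy (not necessarily the metric used in the paper):
# segmental SNR between the original signal and the compression-degraded output.
import numpy as np

def segmental_snr(original: np.ndarray, decoded: np.ndarray, seg_len: int = 1024) -> float:
    n = min(len(original), len(decoded))
    snrs = []
    for start in range(0, n - seg_len + 1, seg_len):
        ref = original[start:start + seg_len]
        err = ref - decoded[start:start + seg_len]
        noise = float(np.sum(err ** 2))
        if noise > 0.0:  # skip segments reproduced exactly (infinite SNR)
            snrs.append(10 * np.log10(np.sum(ref ** 2) / noise))
    return float(np.mean(snrs)) if snrs else float("inf")
```

Sweeping the similarity threshold in an encoder such as the sketch above and plotting this score against the achieved data reduction gives the kind of quality-versus-compression trade-off the paper reports.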


Archive | 2012

e-Culture and m-Culture: The Way that Electronic, Computing and Mobile Devices are Changing the Nature of Art, Design and Culture

Stuart Cunningham; Peter S. Excell

We are now becoming used to the notion that electronic devices and communications systems are creating new categories of art forms, such as television programs, computer-generated special effects in movies and electronically enhanced music. However, we need to be aware that this process is accelerating and is likely to have an even larger significance in the future than has already been the case. Computer games, for example, while apparently being just an ephemeral way to use a little spare time, are actually an embryonic new art form, since they require detailed design and creative input analogous to computer-generated movies and special effects, but with the great difference that many different options must be available, depending on the choices of the user. They can thus be seen as a new type of interactive art form which can also be interpreted from the technological standpoint as a very rich form of interaction between the human being and a relatively powerful computer. Just as, in their early days, movie films, still photographs and television programs were originally seen as trivial and ephemeral, but were later recognized as having long-term artistic merit, the same will surely apply to computer games and to other forms of human interaction with computers. This can be termed ‘e-culture’.

Collaboration


Dive into Stuart Cunningham's collaborations.

Top Co-Authors

Anton Kanev (Bauman Moscow State Technical University)
Boris Kiselev (National Research Nuclear University MEPhI)
Diana Suprun (Bauman Moscow State Technical University)
Mikhail Yuriev (National Research Nuclear University MEPhI)
Terekhov Valery (Bauman Moscow State Technical University)
Viacheslav Yakutenko (National Research Nuclear University MEPhI)