Rory A. Lewis
University of Colorado Colorado Springs
Publications
Featured research published by Rory A. Lewis.
international symposium on methodologies for intelligent systems | 2005
Alicja Wieczorkowska; Piotr Synak; Rory A. Lewis; Zbigniew W. Raś
Music is not only a set of sounds; it evokes emotions that are subjectively perceived by listeners. The growing amount of audio data available on CDs and on the Internet creates a need for content-based searching through these files: the user may be interested in finding pieces in a specific mood. The goal of this paper is to elaborate tools for such a search. A method for an appropriate objective description (parameterization) of audio files is proposed, and experiments on a set of music pieces are described. The results are summarized in the concluding chapter.
multimedia and ubiquitous engineering | 2007
Alicja Wieczorkowska; Zbigniew W. Ras; Xin Zhang; Rory A. Lewis
Musical instrument sounds can be classified in various ways, depending on the instrument or articulation classification. This paper presents a number of possible generalizations of musical instrument sound classifications that can be used to construct different hierarchical decision attributes. Each decision attribute leads to a new classifier, and thus to a different system for automatic indexing of music by instrument sounds and their generalizations. Values of a decision attribute and their generalizations are used to construct atomic queries of a query language built for retrieving musical objects from the MIR database (see http://www.mir.uncc.edu). When a query fails, the cooperative strategy tries to find its lowest generalization that does not fail, taking into consideration all available hierarchical attributes. Thus, the most similar music object in the database is returned as the query answer. This paper evaluates two hierarchical attributes on the same dataset, which contains 2628 distinct musical samples of 102 instruments from the McGill University Master Samples (MUMS) CD collection.
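The cooperative strategy described in this abstract can be sketched as follows. This is a minimal illustration, not the MIR system itself: the instrument hierarchy, sample names, and database contents are invented for the example.

```python
# Toy hierarchical decision attribute: each value maps to its generalization.
HIERARCHY = {
    "violin": "bowed_string",
    "cello": "bowed_string",
    "bowed_string": "chordophone",
    "trumpet": "brass",
    "brass": "aerophone",
}

# Toy database: each music object carries its most specific (leaf) label.
OBJECTS = {"sample_042": "cello", "sample_007": "trumpet"}

def ancestors(term):
    """Chain of generalizations from a term up to the hierarchy root."""
    chain = [term]
    while term in HIERARCHY:
        term = HIERARCHY[term]
        chain.append(term)
    return chain

def matches(term):
    """Objects whose label, or any of its generalizations, equals the term."""
    return [obj for obj, leaf in OBJECTS.items() if term in ancestors(leaf)]

def cooperative_query(term):
    """Answer a query; on failure, retry with the lowest generalization
    of the query term that does not fail."""
    while term is not None:
        hits = matches(term)
        if hits:
            return term, hits
        term = HIERARCHY.get(term)  # generalize one level and retry
    return None, []
```

A query for "violin" fails at the leaf level, but its generalization "bowed_string" succeeds and returns the cello sample as the most similar object, mirroring the query-relaxation behavior the abstract describes.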
intelligent information systems | 2005
Alicja Wieczorkowska; Piotr Synak; Rory A. Lewis; Zbigniew W. Ras
Emotions can be expressed in various ways, and music is one of the possible media for expressing them. However, the perception of music depends on many aspects and is very subjective. This paper focuses on collecting and labelling data for further experiments on discovering emotions in music audio files. A database of more than 300 songs was created, and the data were labelled with adjectives. The whole collection represents 13 more detailed or 6 more general classes, covering the diverse moods, feelings, and emotions expressed in the gathered music pieces.
multimedia and ubiquitous engineering | 2007
Rory A. Lewis; Zbigniew W. Ras
The paper presents the application of classification rules and action rules to scalar music theory, with the intent to: (1) describe certain facts in scalar music theory using classification rules and use them to build a system for automatic indexing of music by scale, region, genre, and emotion; (2) use action-rule mining to create solutions (automatically generated hints) that permit developers to manipulate a composition by retaining the music score while varying the emotions it evokes.
RSEISP | 2007
Rory A. Lewis; Alicja Wieczorkowska
In this paper we present a methodology for the categorization of musical instrument sounds, aiming at the continuing goal of codifying the classification of these sounds for automatic indexing and retrieval purposes. The proposed categorization is based on numerical parameters. The motivation for this paper is the fallibility of the Hornbostel and Sachs generic classification scheme, most commonly used for the categorization of musical instruments. To eliminate the discrepancies of Hornbostel and Sachs' classification of musical sounds, we present a procedure that draws categorization from numerical attributes describing both the time domain and the spectrum of sound, rather than using a classification based directly on the Hornbostel and Sachs scheme. As a result, we propose a categorization system based upon empirical musical parameters and then incorporate the resultant structure into classification rules.
intelligent data analysis | 2010
Rory A. Lewis; Doron Shmueli; Andrew M. White
This paper presents a platform to mine epileptiform activity from electroencephalograms (EEG) by combining the methodologies of Deterministic Finite Automata (DFA) and the Knowledge Discovery in Databases (KDD) TV-Tree. Mining EEG patterns in human brain dynamics is complex yet necessary for identifying and predicting the transient events that occur before and during epileptic seizures. We believe that intelligent data analysis for mining EEG epileptic spikes can be combined with statistical analysis, signal analysis, or KDD to create systems that intelligently choose when to invoke one or more of the aforementioned arts and correctly predict when a person will have a seizure. Herein, we present a correlation platform that uses DFA and action rules to predict which interictal spikes within noise are predictors of the clinical onset of a seizure.
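The core idea of running a deterministic finite automaton over EEG data can be illustrated with a toy sketch. This is not the authors' platform: the amplitude threshold, symbol alphabet, and transition table below are invented, and the "spike" pattern is simplified to a sharp rise followed by a fall.

```python
def symbolize(samples, hi=50.0):
    """Discretize raw EEG samples into a two-symbol alphabet:
    'H' for amplitude above the threshold, 'L' otherwise."""
    return ["H" if abs(s) > hi else "L" for s in samples]

# DFA transition table: a spike is modeled as one or more high-amplitude
# samples ('H') followed by a return to baseline ('L').
TRANSITIONS = {
    ("start", "L"): "start",
    ("start", "H"): "risen",
    ("risen", "H"): "risen",
    ("risen", "L"): "accept",
}

def detect_spikes(samples):
    """Run the DFA over the symbol stream; return the sample indices at
    which the accepting (spike-detected) state is reached."""
    state, hits = "start", []
    for i, sym in enumerate(symbolize(samples)):
        state = TRANSITIONS[(state, sym)]
        if state == "accept":
            hits.append(i)
            state = "start"  # reset to search for the next spike
    return hits
```

A real system would use a richer alphabet (e.g. amplitude and slope bins) and a larger automaton, but the control structure, discretize then drive a transition table, is the same.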
granular computing | 2010
Rory A. Lewis; Brian C. Parks; Andrew M. White
This paper continues the goal of connecting power spectra and Deterministic Finite Automata (DFA) in a manner that enhances the detection of spikes and seizures in epileptiform activity from electroencephalograms (EEG). The goal is to develop robust classification rules for identifying epileptiform activity in the human brain. This paper presents an advancement that uses the authors' proprietary spectral analysis to link the power spectra of rat EEGs during epileptic seizures with the authors' DFA algorithm and their MATLAB spectral analysis. We present a system that links 1) the power spectra of EEG in sleep, spike, and seizure states with 2) Deterministic Finite Automata (DFA). Combining power spectra with DFA to correctly predict and identify epileptiform activity (spikes) and epileptic seizures opens the door to creating classifiers for seizures. It is a common goal for those skilled in the art of epilepsy prediction to create classifiers that make rules and discretize the events leading to an epileptic seizure. Herein we present a means to link time- and frequency-domain components from MATLAB and proprietary software to clinical epileptiform activity.
RSCTC '08 Proceedings of the 6th International Conference on Rough Sets and Current Trends in Computing | 2008
Rory A. Lewis; Amanda Cohen; Wenxin Jiang; Zbigniew W. Raś
Continuing the investigation of identifying musical instruments in a polyphonic domain, we present a system that can identify an instrument in a polyphonic domain with the added noise of numerous interacting and conflicting instruments in an orchestra. A hierarchical tree specifically designed for the breakdown of polyphonic sounds is used to enhance the training of classifiers to correctly estimate an unknown polyphonic sound. This paper shows how the goal of determining which hierarchical levels and which combinations of mix levels are most effective has been achieved. Learning which instrument combinations to use as noise, and at which mix levels that noise best optimizes the training sets, is crucial in the quest to discover instruments in noise. Herein we present a novel system that discriminates instruments in a polyphonic domain.
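Building training sets at several mix levels, as this abstract describes, amounts to blending a target instrument recording with orchestral "noise" at controlled ratios. A minimal sketch, assuming simple linear mixing of equal-length sample arrays (the mix levels below are illustrative, not those evaluated in the paper):

```python
def mix(target, noise, level):
    """Linearly blend a target instrument signal with a noise signal.
    level = 0.0 gives the clean target; level = 1.0 gives pure noise."""
    assert len(target) == len(noise)
    return [(1.0 - level) * t + level * n for t, n in zip(target, noise)]

def training_set(target, noise, levels=(0.2, 0.4, 0.6)):
    """Generate one mixed training example per mix level."""
    return {lvl: mix(target, noise, lvl) for lvl in levels}
```

A classifier trained on such graded mixtures sees the target instrument under progressively heavier interference, which is the intuition behind tuning which mix levels optimize the training set.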
RSCTC'10 Proceedings of the 7th international conference on Rough sets and current trends in computing | 2010
Xin Zhang; Wenxin Jiang; Zbigniew W. Raś; Rory A. Lewis
Automatic indexing of musical instruments in multi-timbre sounds is challenging, especially when partials from different sources overlap with each other. Temporal features, which have been successfully applied to monophonic timbre identification, fail to isolate musical instruments in multi-timbre objects, since detecting the start and end position of each music segment unit is very difficult. The spectral features of MPEG-7 and other popular features are economical to compute but contain limited information about timbre. Compared to spectral features, spectrum-signature features have less information loss and may therefore identify sound sources in multi-timbre music objects with higher accuracy. However, the high dimensionality of the spectrum-signature feature set requires intensive computing and causes estimation-efficiency problems. To overcome these problems, the authors developed a new multi-resolution system with an iterative spectrum-band-matching device to provide fast and accurate recognition.
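The multi-resolution idea, match cheaply at coarse band resolution first, then refine only the surviving candidates, can be sketched as follows. This is an illustrative reconstruction of the general technique, not the authors' system: the band counts, squared-error distance, and candidate spectra are all invented.

```python
def band_energies(spectrum, n_bands):
    """Collapse a spectrum into n_bands coarse band energies."""
    size = len(spectrum) // n_bands
    return [sum(spectrum[i * size:(i + 1) * size]) for i in range(n_bands)]

def distance(a, b):
    """Squared-error distance between two band-energy vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def multires_match(query, candidates, resolutions=(2, 4), keep=2):
    """Iteratively match a query spectrum against candidate spectra,
    pruning to the `keep` closest candidates at each finer resolution."""
    pool = list(candidates.items())
    for n in resolutions:
        q = band_energies(query, n)
        pool.sort(key=lambda kv, q=q, n=n: distance(q, band_energies(kv[1], n)))
        pool = pool[:keep]  # prune before the next, more expensive pass
    return pool[0][0]
```

The pruning is what tames the dimensionality problem the abstract mentions: the full-resolution comparison is only ever run on a handful of survivors.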
international symposium on methodologies for intelligent systems | 2008
Rory A. Lewis; Wenxin Jiang; Zbigniew W. Raś
In the continuing investigation of the relationship between music and emotions, it is recognized that MPEG-7-based MIR systems are the state of the art. It is also known that non-temporal systems are diametrically unconducive to pitch analysis, an imperative for the key and scalar analysis that determines emotions in music. Furthermore, even in a temporal MIR system one can only find the key if the scale is known or, vice versa, find the scale if the key is known. We introduce a new MIRAI-based decision-support system that, given a blind database of music files, can successfully search for both the scale and the key of an unknown song in a music database and accordingly link each song to its set of scales and possible emotional states.
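The chicken-and-egg problem the abstract describes, key requires scale and scale requires key, can be broken by searching the joint space. A minimal sketch, not the MIRAI implementation: score every (key, scale) pair against the pitch classes observed in a song, using standard interval templates for two example scales and a simple overlap count as the score.

```python
# Interval patterns (semitones above the tonic) for two common scales.
SCALES = {
    "major": [0, 2, 4, 5, 7, 9, 11],
    "natural_minor": [0, 2, 3, 5, 7, 8, 10],
}

def best_key_and_scale(pitch_classes):
    """Search all 12 keys x all scale templates jointly and return the
    (key, scale_name) pair matching the most observed pitch classes.
    Keys are pitch-class numbers (0 = C, 1 = C#, ...)."""
    best, best_score = None, -1
    for key in range(12):
        for name, pattern in SCALES.items():
            members = {(key + step) % 12 for step in pattern}
            score = sum(1 for pc in pitch_classes if pc in members)
            if score > best_score:
                best, best_score = (key, name), score
    return best
```

With only 12 keys times a handful of scale templates, the joint search is tiny, which is why fixing neither the key nor the scale in advance is feasible.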