Alicja Wieczorkowska
University of North Carolina at Charlotte
Publications
Featured research published by Alicja Wieczorkowska.
European Conference on Principles of Data Mining and Knowledge Discovery | 2000
Zbigniew W. Ras; Alicja Wieczorkowska
Decision tables classifying customers into groups of different profitability are used for mining rules that classify customers. Attributes are divided into two groups: stable and flexible. By stable attributes we mean attributes whose values cannot be changed by a bank (for example, age, marital status, and number of children). Attributes whose values can be changed or influenced by a bank (such as the interest rate or loan approval for a house in a certain area) are called flexible. Rules are extracted from a decision table, giving preference to flexible attributes. This new class of rules forms a special repository from which new rules, called action rules, are constructed. They show what actions should be taken to improve the profitability of customers.
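The construction of an action rule can be illustrated with a small, purely hypothetical Python sketch (the attribute names and the pair of rules below are invented for illustration; this shows the general intuition, not the authors' algorithm):

    # Toy sketch: given two classification rules that agree on stable attributes,
    # propose changes of flexible attribute values that would move a customer
    # from a less profitable class to a more profitable one.
    STABLE = {"age", "marital_status", "children"}       # hypothetical stable attributes
    FLEXIBLE = {"interest_rate", "loan_approval"}         # hypothetical flexible attributes

    def action_rule(rule_low, rule_high):
        """rule_low / rule_high: attribute -> value conditions of rules predicting
        the low- and high-profitability class, respectively."""
        # the two rules must not contradict each other on stable attributes
        for a in STABLE & rule_low.keys() & rule_high.keys():
            if rule_low[a] != rule_high[a]:
                return None
        # suggest changes for flexible attributes that differ between the rules
        return {a: (rule_low.get(a), rule_high[a])
                for a in FLEXIBLE & rule_high.keys()
                if rule_low.get(a) != rule_high[a]}

    print(action_rule(
        {"age": "30-40", "interest_rate": "high"},
        {"age": "30-40", "interest_rate": "low", "loan_approval": "yes"},
    ))
    # e.g. {'interest_rate': ('high', 'low'), 'loan_approval': (None, 'yes')}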
Intelligent Information Systems | 2006
Alicja Wieczorkowska; Piotr Synak; Zbigniew W. Raś
This paper addresses the problem of multi-label classification of emotions in musical recordings. The test data set contains 875 samples (30 seconds each). The samples were manually labelled into 13 classes, with no limit on the number of labels per sample. The experiments and test results are presented.
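For readers unfamiliar with the multi-label setting, a minimal scikit-learn sketch is given below (the features, emotion labels, and classifier are placeholders, not the setup used in the paper; the key point is that each sample may carry several labels at once):

    import numpy as np
    from sklearn.preprocessing import MultiLabelBinarizer
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.linear_model import LogisticRegression

    X = np.random.rand(8, 20)                  # 8 sound segments, 20 audio features
    y = [["happy"], ["sad", "calm"], ["happy", "energetic"], ["calm"],
         ["sad"], ["energetic"], ["happy", "calm"], ["sad", "calm"]]

    mlb = MultiLabelBinarizer()                # one binary column per emotion label
    Y = mlb.fit_transform(y)

    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X, Y)
    pred = clf.predict(np.random.rand(2, 20))
    print(mlb.inverse_transform(pred))         # a tuple of labels per test sample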
International Symposium on Methodologies for Intelligent Systems | 2005
Alicja Wieczorkowska; Piotr Synak; Rory A. Lewis; Zbigniew W. Raś
Music is not only a set of sounds; it evokes emotions, subjectively perceived by listeners. The growing amount of audio data available on CDs and on the Internet creates a need for content-based searching through these files, for instance to find pieces in a specific mood. The goal of this paper is to elaborate tools for such a search. A method for an appropriate objective description (parameterization) of audio files is proposed, and experiments on a set of music pieces are described. The results are summarized in the concluding section.
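As an illustration of such parameterization, low-level descriptors can be computed with an off-the-shelf library such as librosa (the file name below is hypothetical, and this feature set differs from the one proposed in the paper):

    import numpy as np
    import librosa

    y, sr = librosa.load("piece.wav", duration=30.0)   # hypothetical 30-second excerpt

    features = {
        "mfcc_mean": np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13), axis=1),
        "spectral_centroid": float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr))),
        "spectral_rolloff": float(np.mean(librosa.feature.spectral_rolloff(y=y, sr=sr))),
        "zero_crossing_rate": float(np.mean(librosa.feature.zero_crossing_rate(y))),
    }
    print(features)        # one fixed-length objective description per audio file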
Intelligent Information Systems | 2003
Alicja Wieczorkowska; Jakub Wroblewski; Piotr Synak; Dominik Ślęzak
Automatic content extraction from multimedia files has recently been explored extensively. However, automatic content description of musical sounds has not been broadly investigated and still requires intensive research. In this paper, we investigate how to optimize sound representation for the purpose of musical instrument recognition. We propose to trace trends in the evolution of MPEG-7 descriptor values over time, as well as their combinations. The described process is a typical example of a KDD application, consisting of data preparation, feature extraction, and decision model construction. A discussion of the efficiency of the applied classifiers illustrates the potential for further progress in optimizing sound representation. We believe that further research in this area will provide a foundation for automatic multimedia content description.
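The idea of tracing a trend in descriptor values over time can be sketched as follows (a generic illustration: one descriptor trajectory is summarized by the slope and offset of a fitted line; this is not the exact MPEG-7 feature set used in the paper):

    import numpy as np

    def trend_features(descriptor_frames):
        """descriptor_frames: 1-D array of one descriptor's values over consecutive time frames."""
        t = np.arange(len(descriptor_frames))
        slope, intercept = np.polyfit(t, descriptor_frames, deg=1)
        return {"mean": float(np.mean(descriptor_frames)),
                "slope": float(slope),
                "intercept": float(intercept)}

    # e.g. a spectral-centroid trajectory rising over the attack of a note
    print(trend_features(np.array([800.0, 950.0, 1100.0, 1250.0, 1300.0])))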
Advances in Music Information Retrieval | 2010
Zbigniew W. Ras; Alicja Wieczorkowska
Sound waves propagate through various media and allow communication and entertainment for us, humans. The music we hear or create can be perceived in terms of rhythm, melody, harmony, timbre, or mood, and all of these elements can be of interest to users of music information retrieval systems. Since vast music repositories are available for everyday use (both in private collections and on the Internet), it is desirable, and increasingly necessary, to browse music collections by content. Music information retrieval is therefore potentially of interest to every user of computers and the Internet. A great deal of research is being performed in the music information retrieval domain, and its outcomes, as well as the trends in this research, are certainly worth popularizing. This idea motivated us to prepare this book on Advances in Music Information Retrieval. It is divided into four sections: MIR Methods and Platforms, Harmony, Music Similarity, and Content-Based Identification and Retrieval. A glossary of basic terms is given at the end of the book to familiarize readers with the vocabulary of music information retrieval.
International Symposium on Methodologies for Intelligent Systems | 1999
Alicja Wieczorkowska
An application of rough set (RS) knowledge discovery methods to automatic classification of musical instrument sounds is presented. We also provide basic information on the acoustics of musical instruments. Since a digital sound recording contains a huge amount of data, its redundancy is reduced via parameterization. The parameters extracted from musical instrument sounds are discussed. We use quantization as a preprocessing step for knowledge discovery, to limit the number of parameter values, and show example methods of quantizing these values. Finally, experiments on audio signal classification using the rough set approach are presented and the results are discussed.
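A generic example of such quantization is equal-width binning of a continuous sound parameter into a few symbolic values (a sketch only; the paper discusses several quantization methods, and the parameter values below are invented):

    import numpy as np

    def quantize(values, n_bins=4):
        values = np.asarray(values, dtype=float)
        edges = np.linspace(values.min(), values.max(), n_bins + 1)
        # np.digitize assigns each value to one of n_bins equal-width intervals
        return np.digitize(values, edges[1:-1])

    brightness = [0.12, 0.35, 0.36, 0.70, 0.95, 0.50]   # hypothetical parameter values
    print(quantize(brightness))                          # [0 1 1 2 3 1]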
Advances in Music Information Retrieval | 2010
Alicja Wieczorkowska; Ashoke Kumar Datta; Ranjan Sengupta; Nityananda Dey; Bhaswati Mukherjee
Emotions give meaning to our lives. No aspect of our mental life is more important to the quality and meaning of our existence than emotions; they make life worth living, or sometimes ending. The English word 'emotion' is derived from the French word mouvoir, which means 'to move'. The great classical philosophers (Plato, Aristotle, Spinoza, Descartes) conceived of emotions as responses to certain sorts of events that trigger bodily changes and typically motivate characteristic behavior. It is difficult to find a consensus on the definition of emotion [9].
International Symposium on Methodologies for Intelligent Systems | 2009
Miron B. Kursa; Witold R. Rudnicki; Alicja Wieczorkowska; Elżbieta Kubera; Agnieszka Kubik-Komar
This paper describes automatic classification of the predominant musical instrument in sound mixes, using random forests as classifiers. The sound parameterization applied and the methodology of random forest classification are described, and the significance of the sound parameters used as conditional attributes is investigated. The results show that almost all sound attributes are informative, and that the random forest technique yields much higher classification accuracy than the support vector machines used in previous research on these data.
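The core of such an experiment can be reproduced with scikit-learn as follows (a sketch on assumed synthetic data, not the experimental setup from the paper; the feature importances play the role of checking which conditional attributes are informative):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((200, 30))                    # 200 sound mixes, 30 sound parameters
    y = rng.integers(0, 4, size=200)             # 4 hypothetical predominant-instrument classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    forest = RandomForestClassifier(n_estimators=500, random_state=0)
    forest.fit(X_tr, y_tr)

    print("accuracy:", forest.score(X_te, y_te))
    print("most informative attributes:", np.argsort(forest.feature_importances_)[::-1][:5])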
Foundations of Computational Intelligence | 2009
Wenxin Jiang; Alicja Wieczorkowska; Zbigniew W. Raś
Recognition and separation of sounds played by various instruments is very useful for labeling audio files with semantic information. This is a non-trivial task requiring sound analysis, but the results can aid automatic indexing and browsing of music data when searching for melodies played by user-specified instruments. In this chapter, we describe all stages of this process, including sound parameterization, instrument identification, and separation of layered sounds. The parameterization in our case represents the power amplitude spectrum, but we also perform comparative experiments with parameterization based mainly on spectrum-related sound attributes, including MFCC and parameters describing the shape of the power spectrum of the sound waveform, as well as time-domain parameters. Various classification algorithms have been applied, including k-nearest neighbors (k-NN), which yields good results. The experiments on polyphonic (polytimbral) recordings and the results discussed in this chapter allow us to draw conclusions regarding directions for further experiments on this subject, which can be of interest to any user of music audio data sets.
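As a pointer for readers, the k-NN step can be sketched in a few lines of scikit-learn (the feature vectors and instrument labels below are synthetic stand-ins for the spectrum-based parameterization described in the chapter):

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    X_train = rng.random((300, 13))              # e.g. 13 MFCC coefficients per frame
    y_train = rng.choice(["violin", "piano", "flute"], size=300)

    knn = KNeighborsClassifier(n_neighbors=5)    # label a frame by its 5 nearest neighbours
    knn.fit(X_train, y_train)
    print(knn.predict(rng.random((3, 13))))      # predicted instrument per frame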
Multimedia and Ubiquitous Engineering | 2007
Alicja Wieczorkowska; Zbigniew W. Ras; Xin Zhang; Rory A. Lewis
Musical instrument sounds can be classified in various ways, depending on the instrument or articulation classification. This paper presents a number of possible generalizations of musical instrument sound classification which can be used to construct different hierarchical decision attributes. Each decision attribute leads to a new classifier, and thus to a different system for automatic indexing of music by instrument sounds and their generalizations. The values of a decision attribute and their generalizations are used to construct atomic queries of a query language built for retrieving musical objects from the MIR database (see http://www.mir.uncc.edu). When a query fails, the cooperative strategy tries to find its lowest generalization that does not fail, taking into consideration all available hierarchical attributes; the music object most similar to the requested one is then returned as the query answer. This paper evaluates two hierarchical attributes on the same dataset, which contains 2628 distinct musical samples of 102 instruments from the McGill University Master Samples (MUMS) CD collection.
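The cooperative strategy can be illustrated with a toy sketch (the hierarchy, database contents, and sample names are hypothetical; the real system works over the MIR database mentioned above):

    # If a query for a specific instrument fails, climb the hierarchical
    # decision attribute until a generalization returns some objects.
    HIERARCHY = {"violin": "strings", "cello": "strings",
                 "strings": "instrument", "flute": "woodwinds",
                 "woodwinds": "instrument"}

    DATABASE = {"strings": ["sample_017", "sample_102"],
                "flute": ["sample_033"]}

    def cooperative_query(term):
        while term is not None:
            hits = DATABASE.get(term, [])
            if hits:
                return term, hits
            term = HIERARCHY.get(term)           # move one level up the hierarchy
        return None, []

    print(cooperative_query("violin"))           # ('strings', ['sample_017', 'sample_102'])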