Elias Pampalk
Austrian Research Institute for Artificial Intelligence
Network
Latest external collaborations at the country level. Dive into the details by clicking on the dots.
Publications
Featured research published by Elias Pampalk.
ACM Multimedia | 2002
Elias Pampalk; Andreas Rauber; Dieter Merkl
With Islands of Music we present a system which facilitates exploration of music libraries without requiring manual genre classification. Given pieces of music in raw audio format we estimate their perceived sound similarities based on psychoacoustic models. Subsequently, the pieces are organized on a 2-dimensional map so that similar pieces are located close to each other. A visualization using a metaphor of geographic maps provides an intuitive interface where islands resemble genres or styles of music. We demonstrate the approach using a collection of 359 pieces of music.
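The map organization described above relies on a Self-Organizing Map: similar pieces end up on nearby map units. The following is a minimal SOM sketch in plain NumPy, an illustrative toy rather than the Islands of Music implementation; the grid size, learning rate, and decay schedule are arbitrary assumptions.

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=50, lr=0.5, sigma=1.5, seed=0):
    """Minimal Self-Organizing Map: each grid unit holds a model vector;
    for every sample, the best-matching unit and its grid neighbors are
    pulled toward the sample, with neighborhood and rate decaying over time."""
    rng = np.random.default_rng(seed)
    units = rng.random((grid_h, grid_w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1).astype(float)
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in data:
            d = np.linalg.norm(units - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood around the best-matching unit on the grid
            g = np.exp(-np.linalg.norm(coords - np.array(bmu), axis=-1) ** 2
                       / (2 * (sigma * decay + 1e-9) ** 2))
            units += (lr * decay) * g[..., None] * (x - units)
    return units

def best_matching_unit(units, x):
    """Grid coordinates of the unit whose model vector is closest to x."""
    d = np.linalg.norm(units - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training on feature vectors (psychoacoustic descriptors in the paper's case), pieces mapped to the same or adjacent units are the ones a visualization would group into an island.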
Computer Music Journal | 2004
Elias Pampalk; Simon Dixon; Gerhard Widmer
The availability of large music collections calls for ways to efficiently access and explore them. We present a new approach which combines descriptors derived from audio analysis with meta-information to create different views of a collection. Such views can have a focus on timbre, rhythm, artist, style or other aspects of music. For each view the pieces of music are organized on a map in such a way that similar pieces are located close to each other. The maps are visualized using an Islands of Music metaphor where islands represent groups of similar pieces. The maps are linked to each other using a new technique to align self-organizing maps. The user is able to browse the collection and explore different aspects by gradually changing focus from one view to another. We demonstrate our approach on a small collection using a meta-information-based view and two views generated from audio analysis, namely, beat periodicity as an aspect of rhythm and spectral information as an aspect of timbre.
International Conference on Artificial Neural Networks | 2002
Elias Pampalk; Andreas Rauber; Dieter Merkl
Several methods to visualize clusters in high-dimensional data sets using the Self-Organizing Map (SOM) have been proposed. However, most of these methods only focus on the information extracted from the model vectors of the SOM. This paper introduces a novel method to visualize the clusters of a SOM based on smoothed data histograms. The method is illustrated using a simple 2-dimensional data set and similarities to other SOM based visualizations and to the posterior probability distribution of the Generative Topographic Mapping are discussed. Furthermore, the method is evaluated on a real world data set consisting of pieces of music.
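Following the abstract's description, a smoothed data histogram lets each sample vote for its s best-matching map units with decreasing weights; the accumulated votes approximate the density of the data over the map. A minimal sketch (the linear weighting and normalization here are one plausible reading, not necessarily the paper's exact scheme):

```python
import numpy as np

def smoothed_data_histogram(model_vectors, data, s=2):
    """Each sample votes for its s best-matching units with linearly
    decreasing weights s, s-1, ..., 1 (normalized to sum to 1); the
    accumulated votes form a smoothed density estimate over the map."""
    hist = np.zeros(model_vectors.shape[0])
    weights = np.arange(s, 0, -1) / np.arange(s, 0, -1).sum()
    for x in data:
        d = np.linalg.norm(model_vectors - x, axis=1)
        ranked = np.argsort(d)[:s]  # indices of the s closest units
        hist[ranked] += weights
    return hist / hist.sum()
```

Plotting this histogram over the map grid highlights dense regions (clusters) while the spreading parameter s controls the smoothing.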
Journal of New Music Research | 2003
Andreas Rauber; Elias Pampalk; Dieter Merkl
The availability of large music repositories calls for new ways of automatically organizing and accessing them. While artist-based listings or title indexes may help in locating a specific piece of music, a more intuitive, genre-based organization is required to allow users to browse an archive and explore its contents. So far, however, such organizations following musical styles have had to be designed manually. With the SOM-enhanced JukeBox (SOMeJB) we propose an approach to automatically create an organization of music archives following their perceived sound similarity. More specifically, characteristics of frequency spectra are extracted and transformed according to psychoacoustic models. The resulting psychoacoustic Rhythm Patterns are further organized using the Growing Hierarchical Self-Organizing Map, an unsupervised neural network. On top of this, advanced visualizations including Islands of Music (IoM) and Weather Charts offer an interface for interactive exploration of large music repositories.
AI Magazine | 2003
Gerhard Widmer; Simon Dixon; Werner Goebl; Elias Pampalk; Asmir Tobudic
The article introduces the reader to a large interdisciplinary research project whose goal is to use AI to gain new insight into a complex artistic phenomenon. We study fundamental principles of expressive music performance by measuring performance aspects in large numbers of recordings by highly skilled musicians (concert pianists) and analyzing the data with state-of-the-art methods from areas such as machine learning, data mining, and data visualization. The article first introduces the general research questions that guide the project and then summarizes some of the most important results achieved to date, with an emphasis on the most recent and still rather speculative work. A broad view of the discovery process is given, from data acquisition through data visualization to inductive model building and pattern discovery, and it turns out that AI plays an important role in all stages of such an ambitious enterprise. Our current results show that it is possible for machines to make novel and interesting discoveries even in a domain such as music and that even if we might never find the Horowitz Factor, AI can give us completely new insights into complex artistic behavior.
European Conference on Research and Advanced Technology for Digital Libraries | 2005
Elias Pampalk; Arthur Flexer; Gerhard Widmer
As digital music collections grow, so does the need to organize them automatically. In this paper we present an approach to hierarchically organize music collections at the artist level. Artists are grouped according to similarity, which is computed using a web search engine and standard text retrieval techniques. The groups are described by words found on the webpages using term selection techniques and domain knowledge. We compare different term selection techniques, present a simple demonstration, and discuss our findings.
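The web-based similarity step can be illustrated with standard text retrieval machinery: terms harvested from each artist's web pages are weighted by TF-IDF and artists are compared by cosine similarity. This is a generic sketch of that idea, not the paper's pipeline; the artist names and term lists are made up.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs maps each artist to a list of terms scraped from web pages.
    Returns sparse TF-IDF vectors as {term: weight} dicts."""
    df = Counter()                      # document frequency per term
    for terms in docs.values():
        df.update(set(terms))
    n = len(docs)
    return {artist: {t: tf * math.log(n / df[t])
                     for t, tf in Counter(terms).items()}
            for artist, terms in docs.items()}

def cosine(u, v):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Artists whose pages share distinctive vocabulary (genre words, scene names) score high and end up in the same group; the highest-weighted terms of a group then serve as its label candidates.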
IEEE Transactions on Multimedia | 2007
Tim Pohle; Peter Knees; Markus Schedl; Elias Pampalk; Gerhard Widmer
We present a novel interface to (portable) music players that benefits from intelligently structured collections of audio files. For structuring, we calculate similarities between every pair of songs and model a travelling salesman problem (TSP) that is solved to obtain a playlist (i.e., the track ordering during playback) where the average distance between consecutive pieces of music is minimal according to the similarity measure. The similarities are determined using both audio signal analysis of the music tracks and Web-based artist profile comparison. Indeed, we show how to enhance the quality of the well-established methods based on audio signal processing with features derived from Web pages of music artists. Using TSP allows for creating circular playlists that can be easily browsed with a wheel as an input device. We investigate the usefulness of four different TSP algorithms for this purpose. For evaluating the quality of the generated playlists, we apply a number of quality measures to two real-world music collections. It turns out that the proposed combination of audio and text-based similarity yields better results than the initial approach based on audio data only. We implemented an audio player as a Java applet to demonstrate the benefits of our approach. Furthermore, we present the results of a small user study conducted to evaluate the quality of the generated playlists.
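The playlist-as-TSP idea can be sketched with the simplest of the classic heuristics, greedy nearest neighbor (the paper compares four TSP algorithms; this stands in for none of them specifically). Given a pairwise track-distance matrix, the heuristic always appends the most similar unvisited track and the resulting tour is read as a circular playlist.

```python
def nearest_neighbor_playlist(dist, start=0):
    """Greedy nearest-neighbor TSP heuristic over a symmetric matrix of
    pairwise track distances: from the start track, repeatedly append the
    closest unvisited track. The returned order is circular, so playback
    wraps from the last track back to the first."""
    n = len(dist)
    order = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def tour_length(dist, order):
    """Total distance of the circular playlist, including the wrap-around."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))
```

A circular order is what makes the wheel interface natural: scrolling past the end simply continues at the beginning, with no similarity "cliff" at the seam beyond the single closing edge.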
Journal of New Music Research | 2008
Jean-Julien Aucouturier; Elias Pampalk
Surprise: this didn't quite happen as expected. Most of the information is annotated manually (no automated analysis), unstructured (no taxonomy), in a collaborative, dynamic and unmoderated process (unlike a centralized library). Millions of users routinely connect to websites such as last.fm, MusicStrands, MusicBrainz or Pandora, where they enter free descriptions (a.k.a. tags) of the music they like or dislike. Each user's tags are available for all to see and influence the way other users describe or look for music. The result is a collaborative repository of musical knowledge of a size and richness unheard of so far. "The Beatles" used to be "British pop". What they are now is something akin to Figure 1.
Computer Music Journal | 2006
Matthew Cooper; Jonathan Foote; Elias Pampalk; George Tzanetakis
Affiliations: FX Palo Alto Laboratory, 3400 Hillview Avenue, Building 4, Palo Alto, California 94304, USA; Austrian Research Institute for Artificial Intelligence (OFAI), Freyung 6/6, A-1010 Vienna, Austria; Department of Computer Science (also Music), University of Victoria, PO Box 3055, STN CSC, Victoria, British Columbia V8W 3P6, Canada.
Knowledge Discovery and Data Mining | 2003
Elias Pampalk; Werner Goebl; Gerhard Widmer
Using visualization techniques to explore and understand high-dimensional data is an efficient way to combine human intelligence with the immense brute-force computation power available nowadays. Several visualization techniques have been developed to study the cluster structure of data, i.e., the existence of distinctive groups in the data and how these clusters are related to each other. However, only a few of these techniques lend themselves to studying how this structure changes if the features describing the data are changed. Understanding this relationship between the features and the cluster structure means understanding the features themselves and is thus a useful tool in the feature extraction phase. In this paper we present a novel approach to visualizing how modification of the features with respect to weighting or normalization changes the cluster structure. We demonstrate the application of our approach in two music-related data mining projects.
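The core observation, that reweighting features changes which points count as similar and hence the cluster structure, can be shown in a few lines. This toy sketch (not the paper's visualization method) computes each point's nearest neighbor under two different feature weightings:

```python
import numpy as np

def weighted_neighbors(data, weights):
    """Nearest neighbor of each point under per-feature weights:
    distances are Euclidean after scaling each feature by sqrt(weight),
    so changing the weights changes the neighbor structure."""
    scaled = data * np.sqrt(np.asarray(weights, float))
    d = np.linalg.norm(scaled[:, None, :] - scaled[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # a point is not its own neighbor
    return d.argmin(axis=1)
```

A visualization in the spirit of the paper would render the resulting structure for a whole grid of weightings, letting the analyst see where cluster memberships flip.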
Collaboration
Dive into Elias Pampalk's collaborations.
National Institute of Advanced Industrial Science and Technology
View shared research outputs