Hamid Eghbal-zadeh
Johannes Kepler University of Linz
Publications
Featured research published by Hamid Eghbal-zadeh.
conference on recommender systems | 2017
Andreu Vall; Hamid Eghbal-zadeh; Matthias Dorfer; Markus Schedl; Gerhard Widmer
Automated music playlist generation is a specific form of music recommendation. Generally stated, the user receives a set of song suggestions defining a coherent listening session. We hypothesize that the best way to convey such playlist coherence to new recommendations is to learn it from actual curated examples, in contrast to imposing ad hoc constraints. Collaborative filtering methods can be used to capture underlying patterns in hand-curated playlists. However, the scarcity of thoroughly curated playlists and the bias towards popular songs mean that the vast majority of songs occur in very few playlists and are thus poorly recommended. To overcome this issue, we propose an alternative model based on a song-to-playlist classifier, which learns the underlying structure from actual playlists while leveraging song features derived from audio, social tags, and independent listening logs. Experiments on two datasets of hand-curated playlists show competitive performance compared to collaborative filtering when sufficient training data is available, and more robust performance when recommending rare and out-of-set songs. For example, both approaches achieve a recall@100 of roughly 35% for songs occurring in 5 or more training playlists, whereas the proposed model achieves a recall@100 of roughly 15% for songs occurring in 4 or fewer training playlists, compared to the 3% achieved by collaborative filtering.
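The recall@100 figures quoted above follow the standard recall@k definition: the fraction of held-out relevant songs that appear among the top-k recommendations. A minimal sketch, with hypothetical song IDs (this is an illustration of the metric, not the authors' evaluation code):

```python
def recall_at_k(recommended, relevant, k=100):
    """Fraction of relevant songs that appear in the top-k recommendations."""
    if not relevant:
        return 0.0
    top_k = set(recommended[:k])  # keep only the first k suggestions
    return len(top_k & set(relevant)) / len(relevant)

# Toy example: 3 of the 4 held-out songs appear in the top-5 list.
recs = ["s1", "s7", "s3", "s9", "s2", "s8"]
held_out = ["s1", "s2", "s3", "s4"]
print(recall_at_k(recs, held_out, k=5))  # 0.75
```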
european signal processing conference | 2015
Hamid Eghbal-zadeh; Markus Schedl; Gerhard Widmer
Music artist (i.e., singer) recognition is a challenging task in Music Information Retrieval (MIR). The presence of different musical instruments and the diversity of music genres and singing techniques make it difficult to retrieve artist-relevant information from a song. Many authors have tried to address this problem by using complex features or hybrid systems. In this paper, we propose new song-level timbre-related features that are built from frame-level MFCCs via so-called i-vectors. We report artist recognition results with multiple classifiers such as k-nearest neighbors, discriminant analysis, and naive Bayes using these new features. Our approach yields considerable improvements and outperforms existing methods, achieving 84.31% accuracy using MFCC features on a 20-class artist recognition task.
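The core idea above is aggregating variable-length sequences of frame-level MFCC vectors into a single fixed-size song-level vector. The paper does this with i-vectors (a GMM-based factor-analysis extractor, too involved to sketch here); the stand-in below uses simple mean pooling instead, purely to illustrate the frame-to-song aggregation step. All data and names are illustrative:

```python
def song_level_features(frame_mfccs):
    """Aggregate a list of per-frame MFCC vectors into one song-level vector.

    Stand-in for the paper's i-vector extraction: here we just average
    each coefficient over all frames (mean pooling).
    """
    n_frames = len(frame_mfccs)
    n_coeffs = len(frame_mfccs[0])
    return [sum(frame[c] for frame in frame_mfccs) / n_frames
            for c in range(n_coeffs)]

# Toy input: 3 frames, 2 MFCC coefficients each.
frames = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(song_level_features(frames))  # [3.0, 4.0]
```

The resulting fixed-length vector is what standard classifiers (k-NN, discriminant analysis, naive Bayes) can then consume, regardless of song duration.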
conference on recommender systems | 2018
Yashar Deldjoo; Mihai Gabriel Constantin; Hamid Eghbal-zadeh; Bogdan Ionescu; Markus Schedl; Paolo Cremonesi
We propose a multi-modal content-based movie recommender system that replaces human-generated metadata with content descriptions automatically extracted from the visual and audio channels of a video. Content descriptors improve over traditional metadata in terms of both richness (it is possible to extract hundreds of meaningful features covering various modalities) and quality (content features are consistent across different systems and immune to human errors). Our recommender system integrates state-of-the-art aesthetic and deep visual features as well as block-level and i-vector audio features. For fusing the different modalities, we propose a rank aggregation strategy extending the Borda count approach. We evaluate the proposed multi-modal recommender system comprehensively against metadata-based baselines. To this end, we conduct two empirical studies: (i) a system-centric study to measure the offline quality of recommendations in terms of accuracy-related and beyond-accuracy performance measures (novelty, diversity, and coverage), and (ii) a user-centric online experiment, measuring different subjective metrics, including relevance, satisfaction, and diversity. In both studies, we use a dataset of more than 4,000 movie trailers, which makes our approach versatile. Our results shed light on the accuracy and beyond-accuracy performance of audio, visual, and textual features in content-based movie recommender systems.
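The fusion strategy above extends the classic Borda count, in which each modality's ranked list awards an item points by position and the fused ranking sorts items by total points. A minimal sketch of the plain Borda count baseline (movie names and modality lists are made up; this is not the authors' extended variant):

```python
def borda_fuse(rankings):
    """Fuse several ranked lists of the same items via plain Borda count.

    Each list of length n awards its item at position p (0-based) a score
    of n - p; items are then sorted by total score, highest first.
    """
    scores = {}
    for ranked in rankings:
        n = len(ranked)
        for pos, item in enumerate(ranked):
            scores[item] = scores.get(item, 0) + (n - pos)
    return sorted(scores, key=lambda item: -scores[item])

# Toy per-modality rankings for three movies.
visual = ["movieA", "movieB", "movieC"]
audio = ["movieB", "movieA", "movieC"]
meta = ["movieB", "movieC", "movieA"]
print(borda_fuse([visual, audio, meta]))  # ['movieB', 'movieA', 'movieC']
```

Rank aggregation like this sidesteps the need to calibrate raw scores across modalities, since only each modality's ordering matters.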
bioRxiv | 2018
Hamid Eghbal-zadeh; Lukas Fischer; Niko Popitsch; Florian Kromp; Sabine Taschner-Mandl; Khaled Koutini; Teresa Gerber; Eva Bozsaky; Peter F. Ambros; Inge M. Ambros; Gerhard Widmer; Bernhard A. Moser
Diagnosis and risk stratification of cancer and many other diseases require the detection of genomic breakpoints as a prerequisite for calling copy number alterations (CNA). This, however, is still challenging and requires time-consuming manual curation. As deep-learning methods have outperformed classical state-of-the-art algorithms in various domains and have also been successfully applied to life science problems including medicine and biology, we here propose DeepSNP, a novel deep neural network to learn from genomic data. Specifically, we used a manually curated dataset of 12 genomic single nucleotide polymorphism array (SNPa) profiles as a truth set and aimed at predicting the presence or absence of genomic breakpoints, an indicator of structural chromosomal variations, in windows of 40,000 probes. We compare our results with well-known neural network models as well as with Rawcopy, a tool designed to predict breakpoints and, in addition, genomic segments with high sensitivity. We show that DeepSNP is capable of successfully predicting the presence or absence of a breakpoint in large genomic windows and outperforms state-of-the-art neural network models. Qualitative examples suggest that integrating a localization unit may enable breakpoint detection and prediction of genomic segments, even if the breakpoint coordinates were not provided for network training. These results warrant further evaluation of DeepSNP for breakpoint localization and subsequent calling of genomic segments.
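The task framing above, predicting breakpoint presence or absence per window of 40,000 probes, amounts to splitting each profile into fixed-size windows and labelling a window positive if a known breakpoint falls inside it. A hedged sketch of that windowing-and-labelling step under assumed inputs (probe counts and breakpoint indices are invented; this is not the authors' pipeline):

```python
def label_windows(n_probes, breakpoints, window=40000):
    """Split a probe array into fixed windows of `window` probes.

    Returns one binary label per window: 1 if any known breakpoint
    index falls inside the window, else 0.
    """
    labels = []
    for start in range(0, n_probes, window):
        end = start + window
        has_bp = any(start <= bp < end for bp in breakpoints)
        labels.append(1 if has_bp else 0)
    return labels

# Toy profile: 120,000 probes with breakpoints at indices 45,000 and 99,000.
print(label_windows(120000, [45000, 99000]))  # [0, 1, 1]
```

These window labels are the binary training targets; the network sees only the raw probe values per window, not the breakpoint coordinates themselves.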
international symposium/conference on music information retrieval | 2015
Hamid Eghbal-zadeh; Bernhard Lehner; Markus Schedl; Gerhard Widmer
IEEE Transactions on Affective Computing | 2017
Markus Schedl; Emilia Gómez; Erika Trent; Marko Tkalčič; Hamid Eghbal-zadeh; Agustín Martorell
european signal processing conference | 2017
Hamid Eghbal-zadeh; Bernhard Lehner; Matthias Dorfer; Gerhard Widmer
international symposium/conference on music information retrieval | 2016
Markus Schedl; Hamid Eghbal-zadeh; Emilia Gómez; Marko Tkalcic
arXiv: Learning | 2017
Hamid Eghbal-zadeh; Matthias Dorfer; Gerhard Widmer
arXiv: Learning | 2017
Hamid Eghbal-zadeh; Gerhard Widmer