Thibault Langlois
University of Lisbon
Publications
Featured research published by Thibault Langlois.
Neurocomputing | 2006
André O. Falcão; Thibault Langlois; Andreas Wichert
Abstract In this paper we propose a novel approach for modeling kernels in Radial Basis Function networks. The method provides an extra degree of flexibility to the kernel structure. This flexibility comes through the use of modifier functions applied to the distance computation procedure, essential for all kernel evaluations. Initially the classifier uses an unsupervised method to construct the network topology, where most parameters of the network are defined without any customization from the user. During the second phase only one parameter per kernel is estimated. Experimental evidence on four datasets shows that the algorithm is robust and competitive.
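The distance-modifier idea can be sketched as follows. The Gaussian form and the per-kernel power exponent `p` are illustrative assumptions, not the paper's exact formulation; they only show how a modifier applied to the distance adds a tunable degree of flexibility to each kernel.

```python
import numpy as np

def rbf_activation(x, center, beta, p):
    # Gaussian-style kernel whose distance is first passed through a
    # modifier function; here a per-kernel exponent p (standing in for
    # the single parameter estimated in the second phase) reshapes the
    # distance before the exponential is applied.
    d = np.linalg.norm(x - center)
    return np.exp(-beta * d ** p)

# the unsupervised first phase would place the centers; fixed by hand here
x = np.array([0.2, 0.4])
centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
activations = [rbf_activation(x, c, beta=1.0, p=2.0) for c in centers]
```

With `p = 2` this reduces to an ordinary Gaussian kernel; estimating `p` per kernel lets each basis function sharpen or flatten its response independently.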
Journal of New Music Research | 2011
Gonçalo Marques; Thibault Langlois; Fabien Gouyon; Miguel Lopes; Mohamed Sordo
Abstract In music genre classification, most approaches rely on statistical characteristics of low-level features computed on short audio frames. In these methods, it is implicitly assumed that frames carry equally relevant information and that either individual frames, or distributions thereof, somehow capture the specificities of each genre. In this paper we study the representation space defined by short-term audio features with respect to class boundaries, and compare different processing techniques to partition this space. These partitions are evaluated in terms of accuracy on two genre classification tasks, with several types of classifiers. Experiments show that a randomized, unsupervised partition of the space, used in conjunction with a Markov model classifier, leads to accuracies comparable to the state of the art. We also show that unsupervised partitions of the space tend to create fewer hubs.
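A minimal sketch of the pipeline described above: a randomized codebook partitions the frame space, frame sequences become symbol sequences, and a first-order Markov model per genre scores them. The function names, smoothing scheme, and toy data are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(frames, codebook):
    # assign each audio frame to its nearest codeword (the space partition)
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def transition_matrix(symbols, k, eps=1.0):
    # additive-smoothed first-order transition counts for one genre
    counts = np.full((k, k), eps)
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(symbols, trans):
    # score a symbol sequence under one genre's Markov model
    return float(sum(np.log(trans[a, b])
                     for a, b in zip(symbols[:-1], symbols[1:])))

# randomized, unsupervised partition: codewords drawn at random
codebook = rng.normal(size=(4, 2))
genre_frames = rng.normal(size=(200, 2))          # toy "training" frames
model = transition_matrix(quantize(genre_frames, codebook), k=4)
score = log_likelihood(quantize(genre_frames[:50], codebook), model)
```

Classification would train one such model per genre and pick the genre whose model assigns the test sequence the highest log-likelihood.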
Advances in Multimedia | 2009
Thibault Langlois; Gonçalo Marques
Automatic music genre classification has received a lot of attention from the Music Information Retrieval (MIR) community in recent years. Systems capable of discriminating music genres are essential for managing music databases. This paper presents a method for music genre classification based solely on the audio content of the signal. The method relies on a language modeling approach and takes into account the temporal information of the music signals. First, the music data is transformed into a sequence of symbols, and a model is derived for each genre by estimating n-grams from the training data. As a term of comparison, HMM models for each musical genre were also implemented. Tests on different audio sets show that the proposed approach performs very well and outperforms HMM-based methods.
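The n-gram step can be sketched with a smoothed bigram model over discretized audio symbols. The smoothing scheme and the toy symbol sequences are assumptions; real systems would derive the symbols from the audio as described above.

```python
from collections import Counter
import math

def train_ngram(seq, n, vocab_size, eps=1.0):
    # additive-smoothed n-gram model estimated from one genre's symbols
    ctx = Counter(tuple(seq[i:i + n - 1]) for i in range(len(seq) - n + 1))
    ngr = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

    def logprob(s):
        # log-probability of a new symbol sequence under this genre model
        lp = 0.0
        for i in range(len(s) - n + 1):
            g = tuple(s[i:i + n])
            lp += math.log((ngr[g] + eps) / (ctx[g[:-1]] + eps * vocab_size))
        return lp

    return logprob

rock  = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]    # toy per-genre symbol sequences
disco = [2, 2, 3, 2, 2, 3, 2, 2, 3, 2]
models = {"rock": train_ngram(rock, 2, 4), "disco": train_ngram(disco, 2, 4)}

query = [0, 1, 0, 1, 0]
best = max(models, key=lambda g: models[g](query))
```

The query alternates 0 and 1, a pattern only the "rock" model has seen, so its log-probability wins.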
Proceedings of the 14th International Academic MindTrek Conference on Envisioning Future Media Environments | 2010
Thibault Langlois; Teresa Chambel; Eva Oliveira; Paula Carvalho; Gonçalo Marques; André O. Falcão
Video is a very rich medium that is becoming increasingly dominant. A massive amount of video information is available, but it is very difficult to access unless adequately indexed, which is itself a challenging task. We describe a Video Information Retrieval system, under development, that operates on a database of subtitled documents. Video, subtitle and audio streams are analysed simultaneously in order to index, visualize and retrieve excerpts of video documents that share a given emotional or semantic property.
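One way to make subtitle-based retrieval concrete is an inverted index from words to timestamped excerpts. The data structures below are an illustrative assumption, not the system's actual implementation:

```python
from collections import defaultdict

def build_index(subtitles):
    # subtitles: list of (start_seconds, end_seconds, text) entries;
    # maps each word to the excerpts whose subtitles contain it
    index = defaultdict(list)
    for start, end, text in subtitles:
        for word in text.lower().split():
            index[word.strip(".,!?")].append((start, end))
    return index

subs = [(12.0, 15.5, "I love this song"),
        (40.2, 43.0, "What a sad story"),
        (88.1, 90.7, "This song again!")]
index = build_index(subs)
excerpts = index["song"]   # excerpts whose subtitles mention "song"
```

Retrieval by emotional property would work the same way, with the index keyed on labels produced by the audio and subtitle analysis rather than on raw words.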
Proceedings of the 16th International Academic MindTrek Conference | 2012
Nuno Gil; Nuno Silva; Eduardo Duarte; Pedro Martins; Thibault Langlois; Teresa Chambel
Movies are one of the biggest sources of entertainment, in individual and social contexts, and are increasingly accessible as enormous collections of videos and movies over the Internet, in social media and on interactive TV. These richer environments demand new and more powerful ways to search, browse and view videos and movies, ways that may benefit from video content-based analysis and classification techniques. In this paper, we present and evaluate extended features of content processing, search, overview and browsing in MovieClouds, from overview clouds at the level of the movie space down to individual movies, based on the information conveyed in the different tracks or perspectives of their content, especially audio and subtitles, where most of the semantics is expressed. Tag clouds are adopted as a unifying paradigm, complemented by other approaches, to extend to movies, in a consistent way, the power, flexibility, engagement and fun usually associated with clouds. Evaluation results were very encouraging, reinforcing the previous approach and reflecting the improvements and new features.
European Conference on Interactive TV | 2013
Jorge M. A. Gomes; Teresa Chambel; Thibault Langlois
Movies and games are amongst the biggest sources of entertainment, in individual and social contexts. Increasingly, movies and videos are becoming accessible as enormous collections over the Internet, in social media and interactive TV, demanding new and more powerful ways to search, browse and view them, ways that benefit from video content-based analysis and classification techniques. Game elements, in turn, can help in this often challenging process, e.g. in the audio, by obtaining user feedback to improve the efficacy of classification while maintaining or improving the entertaining quality of the user experience. In this paper, we present and discuss SoundsLike, a gamification approach that engages users in movie soundtrack labeling, based on relevance feedback and integrated in MovieClouds, an interactive web application designed to access, explore and visualize movies based on the information conveyed in the different tracks or perspectives of their content, especially audio and subtitles, where most of the semantics is conveyed, with a special focus on the emotional dimensions expressed in the movies or felt by the viewers.
Proceedings of the 15th International Academic MindTrek Conference on Envisioning Future Media Environments | 2011
Pedro Martins; Thibault Langlois; Teresa Chambel
Movies are one of the biggest sources of entertainment, in individual and social contexts. By combining diverse symbol systems, such as images, text, music and narration, to tell stories, they often engage viewers perceptually, cognitively and emotionally. Advances in digitization and networking are enabling access to enormous collections of videos and movies over the Internet, in social media, and through video-on-demand services on iTV. The development of video content-based analysis and classification techniques is also allowing access to more information about, or contained in, the movies, demanding new ways to search, browse and view videos and movies in this scenario. In this paper, we present and evaluate MovieClouds, an interactive web application designed to access, explore and visualize movies based on the information conveyed in the different tracks or perspectives of their content, especially audio and subtitles, where most of the semantics is expressed, with a special focus on the emotional dimensions expressed in the movies or felt by the viewers. For the overview, analysis and exploratory browsing of the movie collection and of individual movies, it adopts tag clouds as a unifying paradigm, which gained popularity with Web 2.0, with the aim of extending to movies the power, flexibility, engagement and fun usually associated with clouds.
International Journal of Advanced Media and Communication | 2013
Teresa Chambel; Thibault Langlois; Pedro Martins; Nuno Gil; Nuno Silva; Eduardo Duarte
Videos, and especially movies, often engage viewers perceptually, cognitively and emotionally by combining diverse symbol systems, such as images, text, music and narration, to tell stories. As one of the biggest sources of entertainment, in individual and social contexts, they are increasingly accessible as enormous collections over the internet, in social media and on iTV. These richer environments demand new and more powerful ways to search, browse and view videos, ways that may benefit from video content-based analysis and classification techniques. In this paper, we describe and evaluate MovieClouds and its core and extended features of content processing, interactive search, overview and browsing, designed to access, explore and visualise movies, from overview clouds at the level of the movie space down to individual movies, based on the information conveyed in their different content tracks. It adopts tag clouds as a unifying paradigm to extend to movies the power, flexibility, engagement and fun usually associated with clouds.
Intelligent Robots and Systems | 2014
João Lobato Oliveira; Keisuke Nakamura; Thibault Langlois; Fabien Gouyon; Kazuhiro Nakadai; Angelica Lim; Luís Paulo Reis; Hiroshi G. Okuno
In this paper we address the problem of musical genre recognition for a dancing robot with embedded microphones, capable of distinguishing the genre of a musical piece while moving in a real-world scenario. For this purpose, we assess and compare two state-of-the-art musical genre recognition systems, based on Support Vector Machines and Markov models, in different real-world acoustic environments. In addition, we compare different robot audition preprocessing variants (a single channel, and a signal separated from multiple channels) and test different acoustic models, learned a priori, to tackle multiple noise conditions of increasing complexity in the presence of noises of different natures (e.g., robot motion, speech). The results with six different musical genres show improvements on the order of 43.6 percentage points for the most complex conditions when resorting to sound source separation and to acoustic models trained in conditions similar to the testing scenarios. A robot dance demonstration session confirms the applicability of the proposed integration for genre-adaptive dancing robots in real-world noisy environments.
European Conference on Machine Learning | 2003
Pedro F. Campos; Thibault Langlois
This paper presents Abalearn, a self-teaching Abalone program capable of automatically reaching an intermediate level of play without needing expert-labeled training examples, deep searches or exposure to competent play. Our approach is based on a reinforcement learning algorithm that is risk-seeking, since defensive players in Abalone tend never to end a game. We show that it is this risk-sensitivity that allows successful self-play training. We also propose a set of features that seem relevant for achieving a good level of play. We evaluate our approach by using a fixed heuristic opponent as a benchmark, pitting our agents against human players online, and comparing samples of our agents at different stages of training.
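The risk-seeking ingredient can be sketched as an asymmetrically weighted TD(0) update in the style of Mihatsch and Neuneier's risk-sensitive reinforcement learning. Abalearn's exact update rule is not reproduced here, so treat the weighting scheme and parameter values below as assumptions:

```python
def risk_sensitive_td_update(v, s, s_next, reward,
                             alpha=0.1, gamma=1.0, kappa=-0.5):
    # TD(0) with an asymmetric weight on the temporal-difference error.
    # kappa < 0 overweights positive surprises (risk-seeking), which the
    # abstract argues is what makes self-play training succeed in Abalone,
    # where purely defensive play tends never to end a game.
    delta = reward + gamma * v[s_next] - v[s]
    weight = (1 - kappa) if delta > 0 else (1 + kappa)
    v[s] += alpha * weight * delta
    return delta

v = {"s0": 0.0, "s1": 0.0}                 # toy two-state value table
delta = risk_sensitive_td_update(v, "s0", "s1", reward=1.0)
```

With `kappa = 0` this collapses to ordinary TD(0); making it negative biases the learned values toward optimistic, attacking play.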