
Publication


Featured research published by Tiago H. Falk.


Journal of Neural Engineering | 2014

Performance measurement for brain–computer or brain–machine interfaces: a tutorial

David E. Thompson; Lucia Rita Quitadamo; Luca T. Mainardi; Khalil ur Rehman Laghari; Shangkai Gao; Pieter-Jan Kindermans; John D. Simeral; Reza Fazel-Rezai; Matteo Matteucci; Tiago H. Falk; Luigi Bianchi; Cynthia A. Chestek; Jane E. Huggins

Objective: Brain-computer interfaces (BCIs) have the potential to be valuable clinical tools. However, the varied nature of BCIs, combined with the large number of laboratories participating in BCI research, makes uniform performance reporting difficult. To address this situation, we present a tutorial on performance measurement in BCI research. Approach: A workshop on this topic was held at the 2013 International BCI Meeting at Asilomar Conference Center in Pacific Grove, California. This paper contains the consensus opinion of the workshop members, refined through discussion in the following months and the input of authors who were unable to attend the workshop. Main results: Checklists for methods reporting were developed for both discrete and continuous BCIs. Relevant metrics are reviewed for different types of BCI research, with notes on their use to encourage uniform application between laboratories. Significance: Graduate students and other researchers new to BCI research may find this tutorial a helpful introduction to performance measurement in the field.
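
As one concrete illustration of the discrete-BCI metrics such checklists cover, the sketch below computes the Wolpaw information transfer rate, a widely reported bits-per-minute figure. This is the standard textbook formula under its usual simplifying assumptions, not code taken from the tutorial itself.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
    """Wolpaw information transfer rate (bits/min) for a discrete BCI.

    Assumes equiprobable classes and errors spread uniformly over the
    remaining classes -- the standard simplifying assumptions.
    """
    if accuracy <= 0.0 or accuracy > 1.0:
        raise ValueError("accuracy must be in (0, 1]")
    bits = math.log2(n_classes)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1))
    return bits * selections_per_min

# Example: a 4-class speller at 80% accuracy, 10 selections per minute.
print(f"{wolpaw_itr(4, 0.80, 10):.2f} bits/min")
```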


Neurocomputing | 2016

Relevance vector classifier decision fusion and EEG graph-theoretic features for automatic affective state characterization

Rishabh Gupta; Khalil ur Rehman Laghari; Tiago H. Falk

Objective characterization of affective states during music clip watching could lead to disruptive new technologies, such as affective brain-computer interfaces, neuromarketing tools, and affective video tagging systems, to name a few. To date, the majority of existing systems have been developed based on analyzing electroencephalography (EEG) patterns in specific brain regions. With music videos, however, a complex interplay of information transfer exists between various brain regions. In this paper, we propose the use of EEG graph-theoretic analysis to characterize three emotional ratings, valence, arousal, and dominance, as well as the subjective liking rating. For characterization, graph-theoretic features were used to classify emotional states through support vector machine (SVM) and relevance vector machine (RVM) classifiers. Moreover, fusion schemes at the feature and decision levels were used to improve classification performance. In general, our study shows that EEG graph-theoretic features are better suited for emotion classification than traditionally used EEG features, such as spectral power features (SPF) and asymmetry index (AI) features. The percentage increases in classification performance (F1-scores) obtained with the proposed methodologies, relative to the traditionally used SPF and AI features, were: valence (79%), arousal (38%), dominance (56%), and liking (47%). These findings suggest that an EEG graph-theoretic approach, combined with a robust classifier, can better characterize human affective states evoked during music clip watching.
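
A rough sketch of this kind of pipeline, using synthetic connectivity matrices, networkx for the graph measures, and scikit-learn's SVM. The binarization threshold and the three features chosen are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np
import networkx as nx
from sklearn.svm import SVC

def graph_features(connectivity: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Reduce a symmetric channel-by-channel connectivity matrix to a few
    graph-theoretic features. The thresholding and feature choice here are
    illustrative; the paper's exact graph measures may differ."""
    adjacency = (np.abs(connectivity) > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)
    g = nx.from_numpy_array(adjacency)
    return np.array([
        nx.average_clustering(g),   # segregation of local processing
        nx.global_efficiency(g),    # integration across regions
        nx.density(g),              # overall connectedness
    ])

rng = np.random.default_rng(0)
n_trials, n_channels = 40, 32
conn = rng.uniform(0, 1, (n_trials, n_channels, n_channels))
conn = (conn + conn.transpose(0, 2, 1)) / 2          # symmetrize each matrix
X = np.array([graph_features(c) for c in conn])      # hypothetical trial features
y = rng.integers(0, 2, n_trials)                     # e.g., low vs. high valence
clf = SVC(kernel="rbf").fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```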


International Workshop on Acoustic Signal Enhancement | 2014

An improved non-intrusive intelligibility metric for noisy and reverberant speech

João Felipe Santos; Mohammed Senoussaoui; Tiago H. Falk

Non-intrusive speech intelligibility metrics are based solely on the corrupted speech information and a prior model of the speech signal in a given representation. As such, any sources of variability not taken into account by the model will affect the metric's performance. In this paper, we investigate two sources of variability in the auditory-inspired model used by the speech-to-reverberation modulation energy ratio (SRMR) metric, namely speech content and pitch, and propose two updates that aim to reduce the variability caused by these sources. First, we limited the dynamic range of the energies in the modulation spectrum bands in order to reduce the effect of speech content and speaker variability. Second, the range of the modulation filter bank was modified to reduce the variability due to pitch. Experimental results show that the updated metric presents higher performance and lower variability relative to the original SRMR when assessing speech intelligibility in noisy and reverberant environments, and outperforms several standard intrusive and non-intrusive benchmark metrics.
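
The first update can be pictured as a simple clipping of per-band modulation energies. The sketch below uses a hypothetical 30 dB range; an actual SRMR implementation should be consulted for the real parameters.

```python
import numpy as np

def limit_dynamic_range(band_energies: np.ndarray, range_db: float = 30.0) -> np.ndarray:
    """Clip modulation-band energies to `range_db` below the peak energy.

    `band_energies`: per-modulation-band energies on a linear scale.
    The 30 dB figure is illustrative, not the paper's exact value.
    """
    floor = band_energies.max() * 10.0 ** (-range_db / 10.0)
    return np.maximum(band_energies, floor)

# Toy example: the quietest bands are raised to the floor, which reduces
# speech-content- and speaker-dependent spread in the SRMR computation.
energies = np.array([1.0, 0.3, 1e-5, 2e-6])
print(limit_dynamic_range(energies))
```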


International IEEE/EMBS Conference on Neural Engineering | 2013

The effects of text-to-speech system quality on emotional states and frontal alpha band power

Sebastian Arndt; Jan-Niklas Antons; Rishabh Gupta; Khalil ur Rehman Laghari; Robert Schleicher; Sebastian Möller; Tiago H. Falk

The tolerance limit for acceptable multimedia quality is changing as more and more high-quality services reach the market. Thus, negative emotional reactions towards low-quality services may cause user disappointment and are likely to increase churn rate. The current study analyzes how different levels of synthetic speech quality, obtained from different text-to-speech (TTS) systems, affect the emotional response of a user. This is achieved using two methods: subjective, by means of user reports; and neurophysiological, by means of electroencephalography (EEG) analysis. More specifically, we analyzed the frontal alpha band power and correlated it with the subjective ratings based on the Self-Assessment Manikin scale. We found an increase in neuronal activity in the left frontal area with decreasing quality and argue that this is due to user disappointment with low-quality TTS systems as they become harder to understand.
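
A minimal sketch of estimating alpha band power on one EEG channel with a Welch PSD, assuming the common 8-13 Hz alpha definition; the study's exact preprocessing and electrode montage are not described in the abstract.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(channel: np.ndarray, fs: float,
                     lo: float = 8.0, hi: float = 13.0) -> float:
    """Alpha-band power of one EEG channel via Welch's PSD estimate."""
    freqs, psd = welch(channel, fs=fs, nperseg=int(2 * fs))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum() * (freqs[1] - freqs[0])   # integrate PSD over the band

# Toy frontal channel: a 10 Hz oscillation in noise, sampled at 256 Hz.
fs = 256.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
f3 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(f"alpha power: {alpha_band_power(f3, fs):.3f}")
```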


International Conference on Acoustics, Speech, and Signal Processing | 2013

Towards an EEG-based biomarker for Alzheimer's disease: Improving amplitude modulation analysis features

Francisco J. Fraga; Tiago H. Falk; Lucas R. Trambaiolli; Eliezyer F. Oliveira; Walter H. L. Pinaya; Paulo Afonso Medeiros Kanda; Renato Anghinah

In this paper, an EEG-based biomarker for automated Alzheimer's disease (AD) diagnosis is described, based on extending a recently proposed “percentage modulation energy” (PME) metric. More specifically, to improve the signal-to-noise ratio of the EEG signal, PME features were averaged over different durations prior to classification. Additionally, two variants of the PME features were developed: the “percentage raw energy” (PRE) and the “percentage envelope energy” (PEE). Experimental results on a dataset of 88 participants (35 controls, 31 with mild AD, and 22 with moderate AD) show that over 98% accuracy can be achieved with a support vector classifier when discriminating between healthy participants and mild AD patients, thus significantly outperforming the original PME biomarker. Moreover, the proposed system can achieve over 94% accuracy when discriminating between mild and moderate AD, thus opening doors for very early diagnosis.
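
The abstract does not spell out the PME computation; the sketch below illustrates one plausible reading, the fraction of the EEG envelope's modulation energy falling in a given modulation-frequency band, which may differ from the paper's exact recipe.

```python
import numpy as np
from scipy.signal import hilbert, welch

def percentage_modulation_energy(eeg: np.ndarray, fs: float,
                                 mod_lo: float, mod_hi: float) -> float:
    """Fraction of the envelope's modulation energy inside [mod_lo, mod_hi] Hz.

    Illustrative stand-in for the paper's PME feature, not its exact recipe.
    """
    envelope = np.abs(hilbert(eeg))            # amplitude modulation of the signal
    envelope -= envelope.mean()
    freqs, psd = welch(envelope, fs=fs, nperseg=int(4 * fs))
    band = (freqs >= mod_lo) & (freqs <= mod_hi)
    return psd[band].sum() / psd.sum()

fs = 128.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
# Toy EEG: a 10 Hz carrier amplitude-modulated at 2 Hz, plus noise.
x = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 10 * t)
x += 0.3 * rng.standard_normal(t.size)
print(percentage_modulation_energy(x, fs, 1.0, 4.0))
```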


IEEE Signal Processing Letters | 2013

Whispered Speech Detection in Noise Using Auditory-Inspired Modulation Spectrum Features

Milton Sarria-Paja; Tiago H. Falk

Robustness to ambient noise, varying vocal effort, and the availability of only short-duration test utterances represent major challenges for developers of automated speech-enabled applications. Recent studies have proposed the use of vocal effort-matched speaker models as a potential solution to such challenges. However, detecting whispered speech in extremely noisy environments is not a trivial task. This letter proposes the use of auditory-inspired modulation spectral features as a method of separating speech from environment-based components, thus resulting in accurate whispered speech detection at signal-to-noise ratios as low as 0 dB. Experimental results show that the proposed detection algorithm outperforms two benchmark approaches.
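
A simplified illustration of why envelope-modulation features help here: whispered speech retains syllabic-rate (roughly 2-16 Hz) envelope modulations even though its carrier is noise-like. The single-ratio feature below stands in for the paper's auditory-inspired filterbank front-end.

```python
import numpy as np
from scipy.signal import hilbert, welch

def speech_modulation_ratio(frame: np.ndarray, fs: float) -> float:
    """Fraction of the frame's envelope-modulation energy in the 2-16 Hz
    range (typical syllabic rates). A simplified stand-in for the paper's
    auditory-inspired modulation spectral features."""
    env = np.abs(hilbert(frame))
    env -= env.mean()
    freqs, psd = welch(env, fs=fs, nperseg=min(len(env), 8192))
    band = (freqs >= 2) & (freqs <= 16)
    return psd[band].sum() / (psd.sum() + 1e-12)

# Amplitude-modulated noise (a crude whisper stand-in) scores well above
# stationary noise on this ratio, enabling a simple threshold detector.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(3)
noise = rng.standard_normal(t.size)                          # ambient noise
whisper = (1 + np.sin(2 * np.pi * 4 * t)) * rng.standard_normal(t.size)
print(speech_modulation_ratio(noise, fs), speech_modulation_ratio(whisper, fs))
```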


International Conference on Acoustics, Speech, and Signal Processing | 2013

Very early detection of Autism Spectrum Disorders based on acoustic analysis of pre-verbal vocalizations of 18-month old toddlers

João Felipe Santos; Nirit Brosh; Tiago H. Falk; Lonnie Zwaigenbaum; Susan E. Bryson; Wendy Roberts; Isabel M. Smith; Peter Szatmari; Jessica Brian

With the increasing prevalence of Autism Spectrum Disorders (ASD), very early detection has become a key priority research topic, as early interventions can increase the chances of success. Since atypical communication is a hallmark of ASD, automated acoustic-prosodic analyses have received prominent attention. Existing studies, however, have focused on verbal children, typically over the age of three (when many children may be reliably diagnosed) and as old as early teens. Here, an acoustic-prosodic analysis of pre-verbal vocalizations (e.g., babbles, cries) of 18-month-old toddlers is performed. Data were obtained from a prospective longitudinal study looking at high-risk siblings of children with ASD who were also diagnosed with ASD, as well as low-risk, age-matched, typically developing controls. Several acoustic-prosodic features were extracted and used to train support vector machine and probabilistic neural network classifiers; classification accuracy as high as 97% was obtained. Our findings suggest that markers of autism may be present in pre-verbal vocalizations of 18-month-old toddlers and may thus be used to assist clinicians with very early detection of ASD.
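
A sketch of a small acoustic-prosodic feature extractor of the kind described, assuming librosa for pitch tracking; the feature set shown is illustrative, and the study's full set is larger.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def prosodic_features(y: np.ndarray, sr: int) -> np.ndarray:
    """Pitch and energy statistics of one vocalization (illustrative set)."""
    f0, voiced, _ = librosa.pyin(y, fmin=80, fmax=600, sr=sr)
    f0 = f0[~np.isnan(f0)]                       # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]
    return np.array([
        f0.mean() if f0.size else 0.0,           # mean F0
        f0.std() if f0.size else 0.0,            # F0 variability
        rms.mean(),                               # mean energy
        rms.std(),                                # energy variability
    ])

# Quick self-check on a synthetic 300 Hz tone (stands in for a vocalization).
sr = 16000
t = np.arange(0, 0.5, 1 / sr)
tone = 0.1 * np.sin(2 * np.pi * 300 * t)
print(prosodic_features(tone, sr))

# Hypothetical usage on pre-segmented vocalizations:
#   X = np.vstack([prosodic_features(v, sr) for v in vocalizations])
#   clf = SVC().fit(X, labels)   # labels: ASD vs. typically developing
```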


International Conference on Acoustics, Speech, and Signal Processing | 2013

Whispered speaker verification and gender detection using weighted instantaneous frequencies

Milton Sarria-Paja; Tiago H. Falk; Douglas D. O'Shaughnessy

In this paper, automatic speaker verification and gender detection using whispered speech is explored. Whispered speech, despite its reduced perceptibility, has been shown to convey relevant speaker identity and gender information. This study compares the performance of a GMM-UBM speaker verification system trained with normal and whispered speech under different matched and mismatched conditions, and describes the benefits of adaptation in a speaking-style-independent model to handle both vocal efforts. It is shown that performance improvements can be achieved by using speaking-style- and gender-dependent models, as well as by adding features based on the AM-FM signal representation. Moreover, the AM-FM based features were shown to be more discriminative than classical MFCCs for whispered speech gender detection. Experimental results suggest that whispered speech carries sufficient information for reliable automatic speaker identification.
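
A compact sketch of the GMM-UBM recipe with means-only relevance-MAP adaptation, built on scikit-learn's GaussianMixture and synthetic stand-ins for MFCC features; scoring is the usual average log-likelihood ratio against the UBM. This is a generic textbook pipeline, not the paper's exact system.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def map_adapt_means(ubm: GaussianMixture, feats: np.ndarray, r: float = 16.0):
    """MAP-adapt UBM means to a speaker's enrollment features.

    Means-only relevance-MAP adaptation; r is the relevance factor.
    A compact illustration, not a production GMM-UBM system.
    """
    post = ubm.predict_proba(feats)                # (frames, components)
    n_k = post.sum(axis=0)                          # soft frame counts
    ex = (post.T @ feats) / np.maximum(n_k[:, None], 1e-10)
    alpha = (n_k / (n_k + r))[:, None]
    adapted = GaussianMixture(n_components=ubm.n_components,
                              covariance_type=ubm.covariance_type)
    # Reuse the UBM parameters, replacing only the means.
    adapted.weights_, adapted.covariances_ = ubm.weights_, ubm.covariances_
    adapted.precisions_cholesky_ = ubm.precisions_cholesky_
    adapted.means_ = alpha * ex + (1 - alpha) * ubm.means_
    return adapted

rng = np.random.default_rng(4)
background = rng.standard_normal((2000, 13))        # stand-in for MFCC frames
ubm = GaussianMixture(n_components=8, covariance_type="diag").fit(background)
speaker = rng.standard_normal((200, 13)) + 0.5      # enrollment features
model = map_adapt_means(ubm, speaker)
test = rng.standard_normal((100, 13)) + 0.5         # matched test utterance
llr = model.score(test) - ubm.score(test)           # average log-likelihood ratio
print(llr)                                           # > 0 favours the target speaker
```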


Human-centric Computing and Information Sciences | 2016

Using affective brain-computer interfaces to characterize human influential factors for speech quality-of-experience perception modelling

Rishabh Gupta; Khalil Laghari; Hubert J. Banville; Tiago H. Falk

As new speech technologies emerge, telecommunication service providers have to provide a superior user experience in order to remain competitive. To this end, quality-of-experience (QoE) perception modelling and measurement have become a key priority. QoE models rely on three influence factors: technological, contextual, and human. Existing solutions have typically relied on the former two, and human influence factors (HIFs) have been mostly neglected due to the difficulty in measuring them. In this paper, we show that measuring human affective states is important for QoE measurement and propose the use of affective brain-computer interfaces (aBCIs) for objective measurement of perceived QoE for two emerging speech technologies, namely far-field hands-free communications and text-to-speech systems. When incorporating subjectively-derived HIFs into the QoE model, gains of up to 26.3% could be found relative to utilizing only technological factors. When utilizing HIFs derived from an electroencephalography (EEG) based aBCI, in turn, gains of up to 14.5% were observed. These findings show the importance of using aBCIs in QoE measurement and also highlight that further improvement may be warranted once improved affective state correlates are found from EEGs and/or other neurophysiological modalities.
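
A toy version of the reported comparison: fitting a QoE regression from technological factors alone versus technological factors plus a human influence factor, on synthetic data where the HIF genuinely matters. Feature names and the linear model are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 200
tech = rng.uniform(0, 1, (n, 2))        # e.g., codec bitrate, noise level
hif = rng.uniform(0, 1, (n, 1))         # e.g., an EEG-derived valence estimate
# Synthetic QoE scores that genuinely depend on the human factor.
qoe = 2 * tech[:, 0] - tech[:, 1] + 1.5 * hif[:, 0] + 0.2 * rng.standard_normal(n)

r2_tech = cross_val_score(LinearRegression(), tech, qoe, cv=5).mean()
r2_both = cross_val_score(LinearRegression(), np.hstack([tech, hif]), qoe, cv=5).mean()
print(f"tech only: R2={r2_tech:.2f}, tech+HIF: R2={r2_both:.2f}")
```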


Workshop on Applications of Signal Processing to Audio and Acoustics | 2015

PhySyQX: A database for physiological evaluation of synthesised speech quality-of-experience

Rishabh Gupta; Hubert J. Banville; Tiago H. Falk

A product's success in the market can be predicted based on the quality-of-experience (QoE) it offers to its users. With the burgeoning market for text-to-speech (TTS) systems, it has become extremely important to characterise new TTS systems in terms of their QoE. To this end, many objective models for quality estimation have been developed. These state-of-the-art models are developed considering the system and contextual factors which influence the user's experience. Such models generally lack inputs from human factors, as these are not directly observable and are manifested inside users' brains. Therefore, in this study a multi-modal database was developed for neurophysiological identification of the human factors which influence user-perceived QoE, and also to probe into the user's internal quality-formation processes. It is hoped that the database will help improve pre-existing models for quality estimation. The database utilizes neurophysiological tools, such as electroencephalography and functional near-infrared spectroscopy, to record users' brain activity while experiencing synthesised speech produced by various commercially available TTS systems. Moreover, an extensive analysis of participants' ratings is reported in the paper. The database has also been made publicly available online to encourage other researchers to utilize these neurophysiological insights when developing new quality estimation algorithms.

Collaboration


Tiago H. Falk's top co-authors and their affiliations.

Top Co-Authors

Sebastian Arndt

Norwegian University of Science and Technology

Hubert J. Banville

Institut national de la recherche scientifique

Sebastian Möller

Technical University of Berlin

Francisco J. Fraga

Universidade Federal do ABC
