Network


Latest external collaborations at the country level. Click on a dot to see details.

Hotspot


Dive into the research topics where Hamidreza Chinaei is active.

Publication


Featured research published by Hamidreza Chinaei.


International Conference on Acoustics, Speech, and Signal Processing | 2017

Multi-view representation learning via GCCA for multimodal analysis of Parkinson's disease

Juan Camilo Vásquez-Correa; Juan Rafael Orozco-Arroyave; Raman Arora; Elmar Nöth; Najim Dehak; Heidi Christensen; Frank Rudzicz; Tobias Bocklet; Milos Cernak; Hamidreza Chinaei; Julius Hannink; Phani Sankar Nidadavolu; Maria Yancheva; Alyssa Vann; Nikolai Vogler

Information from different bio-signals such as speech, handwriting, and gait has been used to monitor the state of Parkinson's disease (PD) patients; however, all of the multimodal bio-signals may not always be available. We propose a method based on multi-view representation learning via generalized canonical correlation analysis (GCCA) for learning a representation of features extracted from handwriting and gait that can be used as a complement to speech-based features. Three different problems are addressed: classification of PD patients vs. healthy controls, prediction of the neurological state of PD patients according to the UPDRS score, and prediction of a modified version of the Frenchay dysarthria assessment (m-FDA). According to the results, the proposed approach improves performance on the addressed problems, especially in the prediction of the UPDRS and m-FDA scores.
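The GCCA step at the core of this abstract can be sketched as a MAXVAR-style eigenproblem: each view is projected onto its column space, the projections are summed, and the top eigenvectors give a shared representation. This is a minimal numpy sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def gcca(views, k, reg=1e-6):
    """Simplified MAXVAR-style GCCA.
    views: list of (n, d_i) arrays whose rows are the same n samples.
    Returns a shared representation G (n, k) and per-view linear maps."""
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        Xc = X - X.mean(axis=0)                    # center each view
        C = Xc.T @ Xc + reg * np.eye(X.shape[1])   # regularized covariance
        M += Xc @ np.linalg.solve(C, Xc.T)         # projection onto the view's column space
    vals, vecs = np.linalg.eigh(M)                 # eigenvalues in ascending order
    G = vecs[:, ::-1][:, :k]                       # top-k eigenvectors = shared representation
    maps = []
    for X in views:
        Xc = X - X.mean(axis=0)
        C = Xc.T @ Xc + reg * np.eye(X.shape[1])
        maps.append(np.linalg.solve(C, Xc.T @ G))  # map view features into the shared space
    return G, maps
```

In the paper's setting, the handwriting and gait views would be mapped through such per-view projections to complement speech-based features.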


Digital Signal Processing | 2017

NeuroSpeech: An open-source software for Parkinson's speech analysis

Juan Rafael Orozco-Arroyave; Juan Camilo Vásquez-Correa; J. F. Vargas-Bonilla; Raman Arora; Najim Dehak; Phani Sankar Nidadavolu; Heidi Christensen; Frank Rudzicz; Maria Yancheva; Hamidreza Chinaei; Alyssa Vann; Nikolai Vogler; Tobias Bocklet; Milos Cernak; Julius Hannink; Elmar Nöth

A new software tool for modeling pathological speech signals is presented in this paper. The software is called NeuroSpeech. It enables the analysis of pathological speech signals along different speech dimensions: phonation, articulation, prosody, and intelligibility. All of the methods included in the software have been validated in previous experiments and publications. The current version of NeuroSpeech was developed to model dysarthric speech signals from people with Parkinson's disease; however, the structure of the software allows other computer scientists or developers to add other pathologies and/or other measures to complement the existing options. Three different tasks can be performed with the current version of the software: (1) modeling of the speech recordings along the aforementioned speech dimensions; (2) automatic discrimination of Parkinson's vs. non-Parkinson's speech signals (users with access to recordings of other pathologies can retrain the system to detect other diseases); and (3) prediction of the neurological state of the patient according to the Unified Parkinson's Disease Rating Scale (UPDRS) score. Prediction of the dysarthria level according to the Frenchay Dysarthria Assessment scale is also provided (the user can also train the system to predict other kinds of scales or degrees of severity). To the best of our knowledge, this is the first software with the characteristics described above, and we believe it will help other researchers contribute to the state of the art in pathological speech assessment from different perspectives, e.g., from the clinical point of view for interpretation, and from the computer science point of view by enabling tests of different measures and pattern recognition techniques.
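The phonation dimension mentioned above typically covers perturbation measures such as jitter and shimmer. As an illustration of that kind of measure (not NeuroSpeech's actual code; function names are ours), local jitter and shimmer can be computed from sequences of glottal periods and peak amplitudes:

```python
import numpy as np

def local_jitter(periods):
    """Local jitter: mean absolute difference between consecutive glottal
    periods, relative to the mean period (a standard phonation measure)."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Local shimmer: the same ratio, computed on peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
```

A perfectly periodic voice yields zero jitter; dysarthric phonation tends to raise both values.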


International Conference on Acoustics, Speech, and Signal Processing | 2017

On the impact of non-modal phonation on phonological features

Milos Cernak; Elmar Nöth; Frank Rudzicz; Heidi Christensen; Juan Rafael Orozco-Arroyave; Raman Arora; Tobias Bocklet; Hamidreza Chinaei; Julius Hannink; Phani Sankar Nidadavolu; Juan Camilo Vasquez; Maria Yancheva; Alyssa Vann; Nikolai Vogler

Different modes of vibration of the vocal folds contribute significantly to voice quality. Neutral-mode phonation, often used in a modal voice, is the mode against which the others, the non-modal phonations, are contrastively described. This paper investigates the impact of non-modal phonation on phonological posteriors, the probabilities of phonological features inferred from the speech signal using a deep learning approach. Five different non-modal phonations are considered: falsetto, creaky, harsh, tense, and breathy. The impact of such non-modal phonation on phonological features, those of the Sound Pattern of English (SPE), is investigated in both speech analysis and synthesis tasks. We found that breathy and tense phonation impact the SPE features the least, creaky phonation impacts them moderately, and harsh and falsetto phonation impact them the most. We also report the invariant SPE features and those most affected by non-modal phonation.
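The per-feature impact analysis described here can be sketched as a mean absolute deviation between the phonological posteriors of a modal and a non-modal rendition of the same utterance. The feature list and function name below are ours, chosen for illustration:

```python
import numpy as np

# Illustrative subset of SPE phonological features (not the paper's exact set)
SPE_FEATURES = ["vocalic", "consonantal", "high", "back", "low",
                "anterior", "coronal", "round", "tense", "voice"]

def phonation_impact(modal_post, nonmodal_post):
    """Rank phonological features by how much a non-modal phonation shifts
    their posteriors. Inputs are (frames x features) posterior matrices for
    time-aligned modal and non-modal renditions of the same utterance."""
    dev = np.mean(np.abs(modal_post - nonmodal_post), axis=0)  # per-feature deviation
    order = np.argsort(dev)[::-1]                              # most affected first
    return [(SPE_FEATURES[i], float(dev[i])) for i in order]
```

Features at the top of the ranking are the ones a phonation mode such as harsh or falsetto perturbs most; features near zero deviation are invariant.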


Text, Speech and Dialogue | 2014

A Topic Model Scoring Approach for Personalized QA Systems

Hamidreza Chinaei; Luc Lamontagne; François Laviolette; Richard Khoury

To support the personalization of Question Answering (QA) systems, we propose a new probabilistic scoring approach based on the topics of the question and candidate answers. First, a set of topics of interest to the user is learned with a topic modeling approach such as Latent Dirichlet Allocation. Then, the similarity of the user's question to each candidate answer returned by the search engine is estimated by calculating the probability of the candidate answer given the question. This similarity is used to re-rank the answers returned by the search engine. Our preliminary experiments show that the re-ranking substantially improves the performance of the QA system as measured by accuracy and mean reciprocal rank (MRR).
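The scoring step can be sketched in a few lines once a topic model has supplied a topic mixture for the question and per-topic word distributions. This is a minimal sketch of one plausible realization, not the paper's exact model:

```python
import numpy as np

def score_answers(theta_q, phi, answer_counts):
    """Score candidate answers by the probability of their words under the
    question's topic mixture.
    theta_q: (T,) topic mixture inferred for the question (e.g. by LDA).
    phi: (T, V) per-topic word distributions.
    answer_counts: (n_cand, V) bag-of-words counts per candidate answer.
    Returns log P(answer | question topics) per candidate and a best-first order."""
    word_probs = theta_q @ phi                       # (V,) P(w | question's topics)
    logliks = answer_counts @ np.log(word_probs + 1e-12)
    order = np.argsort(logliks)[::-1]                # re-rank: highest probability first
    return logliks, order
```

An answer whose words concentrate in the question's dominant topics scores highest and is moved to the top of the search engine's list.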


International Journal of Speech Technology | 2014

Dialogue POMDP components (Part II): learning the reward function

Hamidreza Chinaei; Brahim Chaib-draa

The partially observable Markov decision process (POMDP) framework has been applied in dialogue systems as a formal framework that represents uncertainty explicitly while being robust to noise. In this context, estimating the dialogue POMDP model components (states, observations, and reward) is a significant challenge, as they have a direct impact on the optimized dialogue POMDP policy. Learning the states and observations underlying a POMDP was covered in the first part (Part I), whereas this part (Part II) covers learning the reward function that the POMDP requires. To this end, we propose two algorithms based on inverse reinforcement learning (IRL). The first, POMDP-IRL-BT (BT for belief transition), approximates a belief transition model analogous to the transition models of Markov decision processes. The second is a point-based POMDP-IRL algorithm, denoted PB-POMDP-IRL (PB for point-based), which approximates the values of new beliefs arising in the computation of policy values using a linear approximation over expert beliefs. Finally, we apply both algorithms to healthcare dialogue management in order to learn a dialogue POMDP from dialogues collected by SmartWheeler (an intelligent wheelchair).
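The belief dynamics that POMDP-IRL-BT approximates with a learned model are, in the exact case, the standard Bayes-filter belief update. A minimal sketch (our notation, with dense transition and observation arrays):

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Exact POMDP belief update: b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b[s].
    b: (S,) current belief; a: action index; o: observation index.
    T[a]: (S, S) state-transition matrix; O[a]: (S, n_obs) observation probabilities."""
    pred = b @ T[a]             # predict: push the belief through the dynamics
    post = O[a][:, o] * pred    # correct: weight by the likelihood of observation o
    return post / post.sum()    # normalize back to a probability distribution
```

A belief-transition model of the kind learned in the paper would approximate the map from b to this b' without access to the true T and O.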


Computational Linguistics | 2017

Identifying and avoiding confusion in dialogue with people with Alzheimer's disease

Hamidreza Chinaei; Leila Chan Currie; Andrew Danks; Hubert Lin; Tejas Mehta; Frank Rudzicz

Alzheimer's disease (AD) is an increasingly prevalent cognitive disorder in which memory, language, and executive function deteriorate, usually in that order. There is a growing need to support individuals with AD and other forms of dementia in their daily lives, and our goal is to do so through speech-based interaction. Given that 33% of conversations with people with middle-stage AD involve a breakdown in communication, it is vital that automated dialogue systems be able to identify those breakdowns and, if possible, avoid them. In this article, we discuss several linguistic features that are verbal indicators of confusion in AD (including vocabulary richness, parse tree structures, and acoustic cues) and apply several machine learning algorithms to identify dialogue-relevant confusion from speech with up to 82% accuracy. We also learn dialogue strategies to avoid confusion in the first place, which is accomplished using a partially observable Markov decision process and which obtains accuracies (up to 96.1%) that are significantly higher than several baselines. This work represents a major step towards automated dialogue systems for individuals with dementia.
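Of the verbal indicators listed above, vocabulary richness is the simplest to compute. A minimal sketch of two such lexical features (our own function names; a real system would add the parse-tree and acoustic features the article describes):

```python
import numpy as np

def lexical_features(utterance):
    """Two simple lexical indicators per utterance: the type-token ratio
    (a vocabulary-richness measure that tends to drop with AD) and the
    mean word length in characters."""
    tokens = utterance.lower().split()
    ttr = len(set(tokens)) / len(tokens)             # unique words / total words
    mean_len = float(np.mean([len(t) for t in tokens]))
    return np.array([ttr, mean_len])
```

Feature vectors like this one would be fed, alongside acoustic and syntactic features, to the classifiers used for confusion detection.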


Archive | 2016

A Few Words on Topic Modeling

Hamidreza Chinaei; Brahim Chaib-draa

Topic modeling techniques are used to discover the topics of (unlabeled) texts. As such, they are considered unsupervised learning techniques that learn patterns in text by treating words as observations. In this context, latent Dirichlet allocation (LDA) is a Bayesian topic modeling approach with properties that are particularly useful in practical applications (Blei et al. 2003). In this section, we walk through LDA, first reviewing the Dirichlet distribution, the basic distribution used in LDA.
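LDA's generative story follows directly from the Dirichlet prior the chapter reviews: each document draws a topic mixture from a Dirichlet, then each word draws a topic and a word from that topic. A minimal sketch of that process (names and parameter values are ours):

```python
import numpy as np

def generate_document(alpha, phi, n_words, rng):
    """Sample one document from LDA's generative model.
    alpha: (T,) Dirichlet concentration parameters.
    phi: (T, V) per-topic word distributions.
    Returns the document's topic mixture theta and its word indices."""
    theta = rng.dirichlet(alpha)                 # document-topic proportions ~ Dir(alpha)
    words = []
    for _ in range(n_words):
        z = rng.choice(len(alpha), p=theta)      # topic assignment for this word
        w = rng.choice(phi.shape[1], p=phi[z])   # word drawn from that topic
        words.append(int(w))
    return theta, words
```

Inference in LDA inverts this process: given only the words, it recovers plausible theta and phi.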


Archive | 2016

Sequential Decision Making in Spoken Dialog Management

Hamidreza Chinaei; Brahim Chaib-draa

This chapter includes two major sections. In Sect. 3.1, we introduce sequential decision making and the mathematical frameworks that support it. We describe the Markov decision process (MDP) and partially observable MDP (POMDP) frameworks and present well-known algorithms for solving them. In Sect. 3.2, we introduce spoken dialog systems (SDSs) and review related work on sequential decision making in spoken dialog management, in particular research applying the POMDP framework to spoken dialog management. Finally, we review the user modeling techniques that have been used for dialog POMDPs.
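Among the well-known MDP solution algorithms the chapter presents, value iteration is the most compact. A minimal sketch with dense numpy arrays (our notation):

```python
import numpy as np

def value_iteration(T, R, gamma=0.9, tol=1e-10):
    """Standard MDP value iteration:
    V(s) <- max_a [ R(s, a) + gamma * sum_s' T[a][s, s'] * V(s') ].
    T: (A, S, S) transition tensor; R: (S, A) rewards.
    Returns the optimal values and the greedy policy."""
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * (T @ V).T          # Q[s, a], one Bellman backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```

POMDP solvers generalize this backup from states to beliefs, which is what makes dialog management under uncertainty tractable only approximately.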


Archive | 2016

Application on Healthcare Dialog Management

Hamidreza Chinaei; Brahim Chaib-draa

In this chapter, we show the application of our proposed methods to healthcare dialog management (Chinaei et al. 2014).


Archive | 2016

Learning the Dialog POMDP Model Components

Hamidreza Chinaei; Brahim Chaib-draa

In this chapter, we propose methods for learning the model components of intent-based dialog POMDPs from unannotated and noisy dialogs.

Collaboration


Dive into Hamidreza Chinaei's collaborations.

Top Co-Authors

Milos Cernak
Idiap Research Institute

Elmar Nöth
University of Erlangen-Nuremberg

Julius Hannink
University of Erlangen-Nuremberg

Raman Arora
Johns Hopkins University