Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Elliot Moore is active.

Publication


Featured research published by Elliot Moore.


IEEE Transactions on Biomedical Engineering | 2008

Critical Analysis of the Impact of Glottal Features in the Classification of Clinical Depression in Speech

Elliot Moore; Mark A. Clements; John W. Peifer; Lydia Weisser

The motivation for this work is to address the current lack of objective tools for the clinical analysis of emotional disorders. This study examines a large breadth of objectively measurable features for use in discriminating depressed speech. Analysis is based on features related to prosodics, the vocal tract, and parameters extracted directly from the glottal waveform. Discrimination of depressed speech was based on a feature selection strategy using the following combinations of feature domains: prosodic measures alone, prosodic and vocal tract measures, prosodic and glottal measures, and all three domains. The combination of glottal and prosodic features produced better discrimination overall than the combination of prosodic and vocal tract features. Analysis of the discriminating feature sets used in the study gives a clear indication that glottal descriptors are vital components of vocal affect analysis.
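
As a hedged illustration of the feature-domain comparison described above (not the authors' code), the sketch below cross-validates a classifier on each combination of prosodic, vocal tract, and glottal feature blocks. The feature counts, the random placeholder data, and the linear-SVM choice are assumptions for demonstration only.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120                                   # hypothetical utterance count
prosodic = rng.normal(size=(n, 10))       # e.g. pitch/energy/rate statistics
vocal_tract = rng.normal(size=(n, 8))     # e.g. formant statistics
glottal = rng.normal(size=(n, 12))        # e.g. glottal timing/spectral statistics
labels = rng.integers(0, 2, size=n)       # 0 = control, 1 = depressed (placeholder)

combinations = {
    "prosodic": [prosodic],
    "prosodic+vocal_tract": [prosodic, vocal_tract],
    "prosodic+glottal": [prosodic, glottal],
    "all": [prosodic, vocal_tract, glottal],
}

for name, blocks in combinations.items():
    X = np.hstack(blocks)                 # concatenate the selected feature domains
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name:>22s}: CV accuracy = {acc:.2f}")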


International Conference on Acoustics, Speech, and Signal Processing | 2004

Algorithm for automatic glottal waveform estimation without the reliance on precise glottal closure information

Elliot Moore; Mark A. Clements

An automated glottal waveform estimation algorithm is presented that improves on a previous manual glottal extraction technique which produced excellent glottal waveform estimates. The algorithm uses only basic approximations of glottal closure regions and successive iterations to find the best candidate for a glottal waveform estimate within a speech frame. Visual comparisons of the glottal waveform estimates created by the algorithm and those generated from the use of glottal closure information provided by an electroglottograph (EGG) reveal that the algorithm produced virtually identical estimates.
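
The sketch below is only a rough, assumed reconstruction of the idea of iterating over candidate closed-phase regions: for several windows near an approximate closure index it fits an LPC vocal-tract model, inverse filters the frame, and keeps the candidate whose integrated residual is smoothest. The LPC method, the smoothness criterion, and the synthetic test frame are stand-ins, not the paper's algorithm.

import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

def lpc_autocorr(x, order):
    # Autocorrelation-method LPC; returns the prediction-error polynomial A(z).
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.linalg.solve(toeplitz(r[:order]), r[1:order + 1])
    return np.concatenate(([1.0], -a))

def estimate_glottal_wave(frame, approx_gci, order=12, search=5):
    # Try several closed-phase windows around a rough glottal-closure index and
    # keep the inverse-filtered estimate with the smoothest glottal flow.
    best, best_score = None, np.inf
    for shift in range(-search, search + 1):
        start = max(0, approx_gci + shift)
        segment = frame[start:start + 4 * order]        # candidate closed-phase region
        if len(segment) <= 2 * order:
            continue
        a = lpc_autocorr(segment, order)                # vocal-tract (inverse) filter
        residual = lfilter(a, [1.0], frame)             # inverse filtering of the frame
        flow = lfilter([1.0], [1.0, -0.99], residual)   # leaky integration -> flow estimate
        score = np.mean(np.abs(np.diff(flow, 2)))       # smoothness proxy for "best candidate"
        if score < best_score:
            best, best_score = flow, score
    return best

# Toy usage on a synthetic voiced frame (impulse train through a resonant filter).
rng = np.random.default_rng(0)
fs = 16000
n = int(0.032 * fs)
source = (np.arange(n) % 160 == 0).astype(float) + 1e-4 * rng.normal(size=n)
frame = lfilter([1.0], [1.0, -1.6, 0.95], source)
flow_estimate = estimate_glottal_wave(frame, approx_gci=12)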


International Conference of the IEEE Engineering in Medicine and Biology Society | 2004

Comparing objective feature statistics of speech for classifying clinical depression

Elliot Moore; Mark A. Clements; John W. Peifer; Lydia Weisser

Human communication is saturated with emotional context that aids in interpreting a speaker's mental state. Speech analysis research involving the classification of emotional states has been studied primarily with prosodic (e.g., pitch, energy, speaking rate) and/or spectral (e.g., formants) features. Glottal waveform features, while receiving less attention (due primarily to the difficulty of feature extraction), have also shown strong clustering potential for various emotional and stress states. This study provides a comparison of the major categories of speech analysis in the application of identifying and clustering feature statistics from a control group and a patient group suffering from a clinical diagnosis of depression.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2003

Investigating the role of glottal features in classifying clinical depression

Elliot Moore; Mark A. Clements; John W. Peifer; Lydia Weisser

The classification of emotion and emotion-related disorders from the voice has often been studied using prosodic (pitch, energy, speaking rate) and other spectral characteristics (formants, power spectral density) of the acoustic speech signal. Glottal waveform features have received little attention in the study of many emotion and emotion-related disorders, but have shown strong correlations in a variety of speech pattern studies, including speaker characterization and stress analysis. We employ glottal extraction techniques to obtain features related to timing, ratios, shimmer, and spectral characteristics of the glottal waveform in the study of clinical depression. Our study reports on several glottal waveform features that show very good separation between a control group and a patient group of males and females suffering from a depressive disorder.
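
As a small, hedged example of two of the glottal feature families named above (with illustrative definitions that may differ from the paper's), the snippet below computes cycle-to-cycle amplitude perturbation (shimmer) and an open-quotient-style timing ratio from assumed per-cycle measurements.

import numpy as np

def shimmer_percent(cycle_peaks):
    # Mean absolute relative change in peak amplitude between adjacent glottal cycles.
    p = np.asarray(cycle_peaks, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p)) / p[:-1])

def open_quotient(open_samples, cycle_samples):
    # Fraction of each glottal cycle spent in the open phase.
    return np.asarray(open_samples, dtype=float) / np.asarray(cycle_samples, dtype=float)

# Hypothetical per-cycle measurements for one utterance.
peaks = [0.82, 0.79, 0.85, 0.80, 0.78]
oq = open_quotient([70, 68, 74, 71], [160, 158, 163, 161])
print(f"shimmer = {shimmer_percent(peaks):.1f}%  mean OQ = {oq.mean():.2f}")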


International Conference on Acoustics, Speech, and Signal Processing | 2009

Investigating glottal parameters for differentiating emotional categories with similar prosodics

Rui Sun; Elliot Moore; Juan F. Torres

Speech prosodics (e.g., pitch and energy) play an important role in the interpretation of emotional expression. However, certain pairs of emotions can be difficult to discriminate due to similar tendencies in their prosodic statistics. The purpose of this paper is to target speaker-dependent expressions of emotional pairs that share statistically similar prosodic information and to investigate a set of glottal features for their ability to find measurable differences in these expressions. Evaluation is based on acted emotional utterances from the Emotional Prosody Speech and Transcripts (EPST) database. While it is in no way assumed that acted speech provides a complete picture of authentic emotion, the value of this information is that the actors adjusted their voice quality to fit their perception of different emotions. Results show statistically significant differences (p < 0.05) in at least one glottal feature for all 30 emotion pairs where prosodic features did not show a significant difference. In addition, the use of single glottal features reduced classification error for 24 emotion pairs in comparison to pitch or energy.
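
A minimal sketch of the per-pair, per-feature comparison described above, using synthetic placeholder data: for one emotion pair, each glottal feature is tested for a significant difference with a two-sample t-test at p < 0.05. The feature names and the choice of test are illustrative assumptions.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
glottal_features = ["open_quotient", "speed_quotient", "NAQ", "H1_H2"]

# Hypothetical per-utterance feature values for two emotions from one speaker.
emotion_a = {f: rng.normal(loc=0.0, scale=1.0, size=40) for f in glottal_features}
emotion_b = {f: rng.normal(loc=0.4, scale=1.0, size=40) for f in glottal_features}

for f in glottal_features:
    t, p = ttest_ind(emotion_a[f], emotion_b[f], equal_var=False)  # Welch's t-test
    flag = "significant" if p < 0.05 else "n.s."
    print(f"{f:>14s}: p = {p:.3f} ({flag})")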


International Conference of the IEEE Engineering in Medicine and Biology Society | 2003

Analysis of prosodic variation in speech for clinical depression

Elliot Moore; Mark A. Clements; John W. Peifer; Lydia Weisser

Understanding how someone is speaking can be as important as what they are saying when evaluating emotional disorders such as depression. In this study, we use the acoustic speech signal to analyze variations in prosodic feature statistics for subjects suffering from a depressive disorder. A new sample database of subjects with and without a depressive disorder is collected, and pitch, energy, and speaking rate feature statistics are generated at the sentence level and grouped into a series of observations (subsets of sentences) for analysis. A common technique for quantifying an observation has been to simply use the average of the feature statistic over the subset of sentences within the observation. However, we investigate the merit of a series of statistical measures as a means of quantifying a subset of feature statistics to capture emotional variations from sentence to sentence within a single observation. Comparisons with the exclusive use of the average show an improvement in overall separation accuracy for other quantifying statistics.
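
As a hedged sketch of the quantifying-statistics idea (with placeholder values and an illustrative set of statistics), the snippet below summarizes the sentence-level values within one observation by several measures rather than by the average alone.

import numpy as np
from scipy.stats import skew, kurtosis

def quantify_observation(sentence_values):
    # Summarize one observation's sentence-level feature statistic in several ways.
    v = np.asarray(sentence_values, dtype=float)
    return {
        "mean": v.mean(),          # the baseline summary
        "std": v.std(ddof=1),      # sentence-to-sentence variability
        "range": v.max() - v.min(),
        "skew": skew(v),
        "kurtosis": kurtosis(v),
    }

# Hypothetical mean-pitch values (Hz) for the sentences in one observation.
print(quantify_observation([182.0, 176.5, 191.2, 169.8, 184.3, 178.9]))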


Affective Computing and Intelligent Interaction | 2011

Investigating glottal parameters and Teager energy operators in emotion recognition

Rui Sun; Elliot Moore

The purpose of this paper is to study the performance of glottal waveform parameters and the Teager energy operator (TEO) in distinguishing binary classes of four emotion dimensions (activation, expectation, power, and valence) using authentic emotional speech. The two feature sets were compared with a 1941-dimensional acoustic feature set, including prosodic, spectral, and other voicing-related features, extracted using the openSMILE toolkit. The comparison highlights the discriminative ability of TEO for the activation and power dimensions, and of the glottal parameters for the expectation and valence dimensions, on authentic speech data. Using the same classification methodology, the TEO and glottal parameters outperformed or performed similarly to the prosodic, spectral, and other voicing-related features (i.e., the feature set obtained using openSMILE).
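
For reference, the discrete Teager energy operator mentioned above is commonly defined as psi[x](n) = x(n)^2 - x(n-1) * x(n+1); the short sketch below applies that definition to a toy tone. Any framing and summary-statistic steps are omitted, and the test signal is illustrative.

import numpy as np

def teager_energy(x):
    # Discrete TEO: x(n)^2 - x(n-1)*x(n+1); output is two samples shorter than the input.
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Toy usage: for a unit-amplitude tone, the TEO is approximately constant at sin(2*pi*f/fs)^2.
fs, f = 16000, 200.0
x = np.sin(2 * np.pi * f * np.arange(fs // 10) / fs)
print(f"mean TEO = {teager_energy(x).mean():.6f}")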


IEEE Transactions on Biomedical Engineering | 2017

Unobtrusive and Wearable Systems for Automatic Dietary Monitoring

Temiloluwa Prioleau; Elliot Moore; Maysam Ghovanloo

The threat of obesity, diabetes, anorexia, and bulimia in our society today has motivated extensive research on dietary monitoring. Standard self-report methods, such as 24-h recall and food frequency questionnaires, are too expensive, burdensome, and unreliable to handle the growing health crisis. Long-term activity monitoring in daily living is a promising approach to provide individuals with quantitative feedback that can encourage healthier habits. Although several studies have attempted to automate dietary monitoring using wearable, handheld, smart-object, and environmental systems, it remains an open research problem. This paper aims to provide a comprehensive review of wearable and handheld approaches from 2004 to 2016. Emphasis is placed on the sensor types used, signal analysis and machine learning methods, and a benchmark of state-of-the-art work in this field. Key issues, challenges, and gaps are highlighted to motivate future work toward the development of effective, reliable, and robust dietary monitoring systems.


International Conference on Acoustics, Speech, and Signal Processing | 2008

A study of glottal waveform features for deceptive speech classification

Juan F. Torres; Elliot Moore; Ernest Bryant

Previous work in the detection of deceptive speech has largely focused on prosodic, vocal tract, and lexical features. Glottal waveform features have been shown to be useful discriminators for various types of speaker affect and warrant further study within the context of deception detection. This paper reports on speaker-dependent machine learning and feature selection experiments for classifying deceptive and non-deceptive speech using a large number of statistical features derived from the glottal waveform. We present current results comparing classification performance and selected feature sets across 19 speakers from the Columbia-SRI-Colorado corpus of deceptive speech and discuss directions for future work.
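
A minimal sketch of a speaker-dependent feature selection and classification loop in the spirit of the experiments described above; the random placeholder data, feature counts, univariate selector, and SVM classifier are assumptions, not the paper's setup.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
for speaker in ["spk01", "spk02", "spk03"]:
    X = rng.normal(size=(80, 60))              # 60 glottal statistics per utterance (placeholder)
    y = rng.integers(0, 2, size=80)            # deceptive vs. non-deceptive labels (placeholder)
    clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC())
    acc = cross_val_score(clf, X, y, cv=5).mean()  # per-speaker cross-validation
    print(f"{speaker}: CV accuracy = {acc:.2f}")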


Archive | 2007

Application of a GA/Bayesian Filter-Wrapper Feature Selection Method to Classification of Clinical Depression from Speech Data

Juan F. Torres; Ashraf Saad; Elliot Moore

This paper builds on previous work in which a feature selection method based on Genetic Programming (GP) was applied to a database containing a very large set of features that were extracted from the speech of clinically depressed patients and control subjects, with the goal of finding a small set of highly discriminating features. Here, we report improved results that were obtained by applying a technique that constructs clusters of correlated features and a Genetic Algorithm (GA) search that seeks to find the set of clusters that maximizes classification accuracy. While the final feature sets are considerably larger than those previously obtained using the GP approach, the classification performance is much improved in terms of both sensitivity and specificity. The introduction of a modified fitness function that slightly favors smaller feature sets resulted in further reduction of the feature set size without any loss in classification performance.
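
As a hedged sketch of the modified fitness idea (not the authors' implementation), the snippet below scores a binary mask over feature clusters by the cross-validated accuracy of a naive Bayes classifier minus a small penalty on the number of selected features. The clustering step, penalty weight, and data are illustrative.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def fitness(mask, clusters, X, y, size_penalty=0.002):
    # Accuracy on the features from the selected clusters, minus a slight size term.
    selected = [i for c, keep in zip(clusters, mask) if keep for i in c]
    if not selected:
        return 0.0
    acc = cross_val_score(GaussianNB(), X[:, selected], y, cv=5).mean()
    return acc - size_penalty * len(selected)

# Toy usage with random data and hand-made clusters of correlated columns.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 12))
y = rng.integers(0, 2, size=100)
clusters = [[0, 1, 2], [3, 4], [5, 6, 7, 8], [9], [10, 11]]
mask = np.array([1, 0, 1, 1, 0], dtype=bool)     # one GA candidate (cluster subset)
print(f"fitness = {fitness(mask, clusters, X, y):.3f}")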

Collaboration


Dive into Elliot Moore's collaborations.

Top Co-Authors


Mark A. Clements

Georgia Institute of Technology


Juan F. Torres

Georgia Institute of Technology


Rui Sun

Georgia Institute of Technology


John W. Peifer

Georgia Institute of Technology


Lydia Weisser

Georgia Regents University


Matthew Farina

Georgia State University


Maysam Ghovanloo

Georgia Institute of Technology
