
Publication

Featured research published by Paul Felt.


North American Chapter of the Association for Computational Linguistics | 2015

Early Gains Matter: A Case for Preferring Generative over Discriminative Crowdsourcing Models

Paul Felt; Kevin Black; Eric K. Ringger; Kevin D. Seppi; Robbie Haertel

In modern practice, labeling a dataset often involves aggregating annotator judgments obtained from crowdsourcing. State-of-the-art aggregation is performed via inference on probabilistic models, some of which are data-aware, meaning that they leverage features of the data (e.g., words in a document) in addition to annotator judgments. Previous work largely prefers discriminatively trained conditional models. This paper demonstrates that a data-aware crowdsourcing model incorporating a generative multinomial data model enjoys a strong competitive advantage over its discriminative log-linear counterpart in the typical crowdsourcing setting. That is, the generative approach is better except when the annotators are highly accurate, in which case simple majority vote is often sufficient. Additionally, we present a novel mean-field variational inference algorithm for the generative model that significantly improves on the previously reported state of the art for that model. We validate our conclusions on six text classification datasets with both human-generated and synthetic annotations.
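The abstract above compares model-based aggregation against the simple majority-vote baseline. As a point of reference, that baseline can be sketched as follows; the function name and toy labels are illustrative, not taken from the paper:

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate crowd labels for one item by majority vote.

    `annotations` is a list of labels from different annotators;
    ties are broken by whichever label was counted first.
    """
    counts = Counter(annotations)
    label, _ = counts.most_common(1)[0]
    return label

# Toy dataset: one list of annotator judgments per item.
judgments = [
    ["pos", "pos", "neg"],  # two of three annotators say "pos"
    ["neg", "neg", "neg"],
    ["pos", "neg", "neg"],
]
labels = [majority_vote(a) for a in judgments]
```

Majority vote ignores both annotator reliability and the document's features; the paper's point is that a generative data-aware model outperforms this baseline except when annotators are highly accurate.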


Conference on Computational Natural Language Learning | 2015

Making the Most of Crowdsourced Document Annotations: Confused Supervised LDA

Paul Felt; Eric K. Ringger; Jordan L. Boyd-Graber; Kevin D. Seppi

Corpus labeling projects frequently use low-cost workers from microtask marketplaces; however, these workers are often inexperienced or have misaligned incentives. Crowdsourcing models must be robust to the resulting systematic and nonsystematic inaccuracies. We introduce a novel crowdsourcing model that adapts the discrete supervised topic model sLDA to handle multiple corrupt, usually conflicting (hence “confused”) supervision signals. Our model achieves significant gains over previous work in the accuracy of deduced ground truth.


Linguistic Annotation Workshop | 2015

An Analytic and Empirical Evaluation of Return-on-Investment-Based Active Learning

Robbie Haertel; Eric K. Ringger; Kevin D. Seppi; Paul Felt

Return-on-Investment (ROI) is a cost-conscious approach to active learning (AL) that considers both estimates of cost and of benefit in active sample selection. We investigate the theoretical conditions for successful cost-conscious AL using ROI by examining the conditions under which ROI would optimize the area under the cost/benefit curve. We then empirically measure the degree to which optimality is jeopardized in practice when the conditions are violated. The reported experiments involve an English part-of-speech annotation task. Our results show that ROI can indeed successfully reduce total annotation costs and should be considered as a viable option for machine-assisted annotation. On the basis of our experiments, we make recommendations for benefit estimators to be employed in ROI. In particular, we find that the more linearly related a benefit estimate is to the true benefit, the better the estimate performs when paired in ROI with an imperfect cost estimate. Lastly, we apply our analysis to help explain the mixed results of previous work on these questions.
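The ROI selection rule the abstract describes (score each candidate by estimated benefit per unit of estimated cost and pick the best) can be sketched as below. The stand-in estimators (model uncertainty for benefit, sentence length for cost) are assumptions for illustration, not the paper's actual estimators:

```python
def roi_select(candidates, benefit_est, cost_est):
    """Pick the unlabeled sample with the best return on investment.

    Each candidate is scored as estimated benefit divided by
    estimated annotation cost; the argmax is selected for labeling.
    """
    return max(candidates, key=lambda x: benefit_est(x) / cost_est(x))

# Toy pool of (id, model uncertainty, sentence length) tuples, where
# uncertainty stands in for benefit and length stands in for cost.
pool = [("a", 0.9, 30), ("b", 0.5, 5), ("c", 0.8, 10)]
picked = roi_select(pool,
                    benefit_est=lambda x: x[1],
                    cost_est=lambda x: x[2])
```

Here item "b" wins despite its lower raw uncertainty, because it is much cheaper to annotate; this is exactly the trade-off ROI formalizes, and why the paper's finding that benefit estimates should track true benefit linearly matters when the cost estimate is imperfect.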


North American Chapter of the Association for Computational Linguistics | 2010

Parallel Active Learning: Eliminating Wait Time with Minimal Staleness

Robbie Haertel; Paul Felt; Eric K. Ringger; Kevin D. Seppi


Language Resources and Evaluation | 2010

Tag Dictionaries Accelerate Manual Annotation.

Marc Carmen; Paul Felt; Robbie Haertel; Deryle Lonsdale; Peter McClanahan; Owen Merkling; Eric K. Ringger; Kevin D. Seppi


Language Resources and Evaluation | 2010

CCASH: A Web Application Framework for Efficient, Distributed Language Resource Development.

Paul Felt; Owen Merkling; Marc Carmen; Eric K. Ringger; Warren Lemmon; Kevin D. Seppi; Robbie Haertel


Language Resources and Evaluation | 2014

Evaluating machine-assisted annotation in under-resourced settings

Paul Felt; Eric K. Ringger; Kevin D. Seppi; Kristian Heal; Robbie Haertel; Deryle Lonsdale


Language Resources and Evaluation | 2014

Momresp: A Bayesian Model for Multi-Annotator Document Labeling

Paul Felt; Robbie Haertel; Eric K. Ringger; Kevin D. Seppi


Archive | 2012

Improving the Effectiveness of Machine-Assisted Annotation

Paul Felt


International Conference on Computational Linguistics | 2016

Semantic Annotation Aggregation with Conditional Crowdsourcing Models and Word Embeddings.

Paul Felt; Eric K. Ringger; Kevin D. Seppi

Collaboration

Paul Felt's top co-authors.

Top Co-Authors

Kevin D. Seppi (Brigham Young University)
Robbie Haertel (Brigham Young University)
Kristian Heal (Brigham Young University)
Marc Carmen (Brigham Young University)
Owen Merkling (Brigham Young University)
Jeffrey Lund (Brigham Young University)
Warren Lemmon (Brigham Young University)