Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Janyce Wiebe is active.

Publication


Featured research published by Janyce Wiebe.


Empirical Methods in Natural Language Processing | 2005

Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis

Theresa Wilson; Janyce Wiebe; Paul Hoffmann

This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.
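The two-step structure described in the abstract, first deciding neutral versus polar and then disambiguating the polarity of polar expressions, can be sketched as follows. The tiny lexicon and negation rule here are illustrative placeholders, not the paper's actual features or classifiers.

```python
# Minimal sketch of two-step phrase-level sentiment classification:
# step 1 decides neutral vs. polar, step 2 disambiguates polarity.
# The prior-polarity lexicon and negator list are toy stand-ins.

PRIOR_POLARITY = {"great": "positive", "awful": "negative", "trust": "positive"}
NEGATORS = {"not", "never", "no"}

def contextual_polarity(phrase: str) -> str:
    tokens = phrase.lower().split()
    polar = [t for t in tokens if t in PRIOR_POLARITY]
    # Step 1: no clue from the prior-polarity lexicon -> neutral.
    if not polar:
        return "neutral"
    # Step 2: disambiguate; a preceding negator flips the prior polarity.
    polarity = PRIOR_POLARITY[polar[0]]
    if any(t in NEGATORS for t in tokens[:tokens.index(polar[0])]):
        polarity = "negative" if polarity == "positive" else "positive"
    return polarity
```

For example, `contextual_polarity("not a great idea")` yields "negative" even though "great" has positive prior polarity, while a phrase with no lexicon hit is classified neutral in step 1.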


Language Resources and Evaluation | 2005

Annotating Expressions of Opinions and Emotions in Language

Janyce Wiebe; Theresa Wilson; Claire Cardie

This paper describes a corpus annotation project to study issues in the manual annotation of opinions, emotions, sentiments, speculations, evaluations and other private states in language. The resulting corpus annotation scheme is described, as well as examples of its use. In addition, the manual annotation process and the results of an inter-annotator agreement study on a 10,000-sentence corpus of articles drawn from the world press are presented.


Empirical Methods in Natural Language Processing | 2003

Learning extraction patterns for subjective expressions

Ellen Riloff; Janyce Wiebe

This paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective (opinionated) expressions. High-precision classifiers label unannotated data to automatically create a large training set, which is then given to an extraction pattern learning algorithm. The learned patterns are then used to identify more subjective sentences. The bootstrapping process learns many subjective patterns and increases recall while maintaining high precision.
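The bootstrapping loop summarized above, where a high-precision seed labeler creates training data, patterns are learned from it, and the learned patterns then label further sentences, can be sketched roughly as follows. The seed clues and the bigram "patterns" are simplified stand-ins for the paper's linguistically rich extraction patterns.

```python
# Rough sketch of extraction-pattern bootstrapping: a high-precision
# seed rule labels sentences, patterns frequent in those sentences are
# learned, and the learned patterns then label additional sentences.
from collections import Counter

SEED_CLUES = {"complained", "praised", "fears"}  # toy high-precision clues

def seed_label(sentence):
    """High-precision rule: subjective only if a seed clue is present."""
    return "subjective" if set(sentence.split()) & SEED_CLUES else None

def learn_patterns(sentences, min_count=2):
    """Count bigrams in seed-labeled subjective sentences as 'patterns'."""
    counts = Counter()
    for s in sentences:
        if seed_label(s) == "subjective":
            toks = s.split()
            counts.update(zip(toks, toks[1:]))
    return {p for p, c in counts.items() if c >= min_count}

def bootstrap_label(sentences):
    """Label with the seed rule first, then with the learned patterns."""
    patterns = learn_patterns(sentences)
    labeled = []
    for s in sentences:
        toks = s.split()
        hit = seed_label(s) or (
            "subjective" if set(zip(toks, toks[1:])) & patterns else None
        )
        labeled.append((s, hit))
    return labeled
```

A sentence containing no seed clue can still be caught by a learned pattern, which is how the loop increases recall beyond the seed rule alone.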


Computational Linguistics | 2004

Learning Subjective Language

Janyce Wiebe; Theresa Wilson; Rebecca F. Bruce; Matthew Bell; Melanie J. Martin

Subjectivity in natural language refers to aspects of language used to express opinions, evaluations, and speculations. There are numerous natural language processing applications for which subjectivity analysis is relevant, including information extraction and text categorization. The goal of this work is learning subjective language from corpora. Clues of subjectivity are generated and tested, including low-frequency words, collocations, and adjectives and verbs identified using distributional similarity. The features are also examined working together in concert. The features, generated from different data sets using different procedures, exhibit consistency in performance in that they all do better and worse on the same data sets. In addition, this article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective, and it provides the results of an annotation study assessing the subjectivity of sentences with high-density features. Finally, the clues are used to perform opinion piece recognition (a type of text categorization and genre detection) to demonstrate the utility of the knowledge acquired in this article.
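The density idea in the abstract, that a word is more likely to be used subjectively when subjectivity clues cluster in its surrounding context, can be illustrated with a toy sliding-window count. The clue set below is a placeholder, not one of the article's learned clue lists.

```python
# Toy illustration of subjectivity-clue density: count clues in a
# window around a token; a high ratio suggests the token itself is
# likely being used subjectively. The clue set is a stand-in.

CLUES = {"alarming", "remarkable", "disastrous", "surely"}

def clue_density(tokens, index, window=3):
    """Fraction of context tokens (within the window) that are clues."""
    lo, hi = max(0, index - window), min(len(tokens), index + window + 1)
    context = tokens[lo:index] + tokens[index + 1:hi]
    return sum(t in CLUES for t in context) / max(1, len(context))
```

A word like "plan" in "surely this remarkable plan is disastrous" sits in a high-density context, whereas the same word in a dry factual sentence scores zero.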


International Conference on Computational Linguistics | 2000

Effects of adjective orientation and gradability on sentence subjectivity

Vasileios Hatzivassiloglou; Janyce Wiebe

Subjectivity is a pragmatic, sentence-level feature that has important implications for text processing applications such as information extraction and information retrieval. We study the effects of dynamic adjectives, semantically oriented adjectives, and gradable adjectives on a simple subjectivity classifier, and establish that they are strong predictors of subjectivity. A novel trainable method that statistically combines two indicators of gradability is presented and evaluated, complementing existing automatic techniques for assigning orientation labels.


Computational Linguistics | 2009

Recognizing contextual polarity: An exploration of features for phrase-level sentiment analysis

Theresa Wilson; Janyce Wiebe; Paul Hoffmann

Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words are used in phrases expressing negative sentiments, or vice versa. Also, quite often words that are positive or negative out of context are neutral in context, meaning they are not even being used to express a sentiment. The goal of this work is to automatically distinguish between prior and contextual polarity, with a focus on understanding which features are important for this task. Because an important aspect of the problem is identifying when polar terms are being used in neutral contexts, features for distinguishing between neutral and polar instances are evaluated, as well as features for distinguishing between positive and negative contextual polarity. The evaluation includes assessing the performance of features across multiple machine learning algorithms. For all learning algorithms except one, the combination of all features together gives the best performance. Another facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polarity. These experiments show that the presence of neutral instances greatly degrades the performance of these features, and that perhaps the best way to improve performance across all polarity classes is to improve the system's ability to identify when an instance is neutral.


International Conference on Computational Linguistics | 2005

Creating subjective and objective sentence classifiers from unannotated texts

Janyce Wiebe; Ellen Riloff

This paper presents the results of developing subjectivity classifiers using only unannotated texts for training. The performance rivals that of previous supervised learning approaches. In addition, we advance the state of the art in objective sentence classification by learning extraction patterns associated with objectivity and creating objective classifiers that achieve substantially higher recall than previous work with comparable precision.


North American Chapter of the Association for Computational Linguistics | 2003

Learning subjective nouns using extraction pattern bootstrapping

Ellen Riloff; Janyce Wiebe; Theresa Wilson

We explore the idea of creating a subjectivity classifier that uses lists of subjective nouns learned by bootstrapping algorithms. The goal of our research is to develop a system that can distinguish subjective sentences from objective sentences. First, we use two bootstrapping algorithms that exploit extraction patterns to learn sets of subjective nouns. Then we train a Naive Bayes classifier using the subjective nouns, discourse features, and subjectivity clues identified in prior research. The bootstrapping algorithms learned over 1000 subjective nouns, and the subjectivity classifier performed well, achieving 77% recall with 81% precision.
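The pipeline described above, training a Naive Bayes sentence classifier on features derived from learned subjective nouns, can be sketched with a toy model. The noun list, training sentences, and single binary feature here are hypothetical simplifications of the paper's setup, which also used discourse features and other subjectivity clues.

```python
# Toy Naive Bayes subjectivity classifier in the spirit described
# above: the feature is whether a sentence contains a (hypothetical)
# learned subjective noun; probabilities use add-one smoothing.
import math
from collections import Counter

SUBJECTIVE_NOUNS = {"outrage", "concern", "hope", "praise"}  # stand-in list

def has_subjective_noun(sentence):
    return any(t in SUBJECTIVE_NOUNS for t in sentence.lower().split())

def train(labeled):
    """Estimate log P(label) and log P(feature | label) from pairs."""
    label_counts = Counter(lbl for _, lbl in labeled)
    hit_counts = Counter()
    for s, lbl in labeled:
        if has_subjective_noun(s):
            hit_counts[lbl] += 1
    model = {}
    for lbl, n in label_counts.items():
        p_hit = (hit_counts[lbl] + 1) / (n + 2)  # add-one smoothing
        model[lbl] = (math.log(n / len(labeled)),
                      math.log(p_hit), math.log(1 - p_hit))
    return model

def classify(model, sentence):
    hit = has_subjective_noun(sentence)
    def score(lbl):
        log_prior, log_hit, log_miss = model[lbl]
        return log_prior + (log_hit if hit else log_miss)
    return max(model, key=score)
```

Trained on a handful of labeled sentences, the model prefers "subjective" when a listed noun appears and "objective" otherwise; the real classifier combines many such features rather than one.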


Empirical Methods in Natural Language Processing | 2005

OpinionFinder: A System for Subjectivity Analysis

Theresa Wilson; Paul Hoffmann; Swapna Somasundaran; Jason Kessler; Janyce Wiebe; Yejin Choi; Claire Cardie; Ellen Riloff; Siddharth Patwardhan

OpinionFinder is a system that performs subjectivity analysis, automatically identifying when opinions, sentiments, speculations, and other private states are present in text. Specifically, OpinionFinder aims to identify subjective sentences and to mark various aspects of the subjectivity in these sentences, including the source (holder) of the subjectivity and words that are included in phrases expressing positive or negative sentiments.


Meeting of the Association for Computational Linguistics | 1999

Development and Use of a Gold-Standard Data Set for Subjectivity Classifications

Janyce Wiebe; Rebecca F. Bruce; Tom O'Hara

This paper presents a case study of analyzing and improving intercoder reliability in discourse tagging using statistical techniques. Bias-corrected tags are formulated and successfully used to guide a revision of the coding manual and develop an automatic classifier.

Collaboration


Dive into Janyce Wiebe's collaborations.

Top Co-Authors

Theresa Wilson

University of Pittsburgh

Rebecca F. Bruce

University of North Carolina at Asheville

Carmen Banea

University of North Texas

Tom O'Hara

New Mexico State University

Lingjia Deng

University of Pittsburgh

Yoonjung Choi

University of Pittsburgh
