Publication


Featured research published by Huy V. Nguyen.


North American Chapter of the Association for Computational Linguistics | 2015

Extracting Argument and Domain Words for Identifying Argument Components in Texts

Huy V. Nguyen; Diane J. Litman

Argument mining studies in natural language text often use lexical (e.g. n-grams) and syntactic (e.g. grammatical production rules) features with all possible values. In prior work on a corpus of academic essays, we demonstrated that such large and sparse feature spaces can cause difficulty for feature selection and proposed a method to design a more compact feature space. The proposed feature design is based on post-processing a topic model to extract argument and domain words. In this paper we investigate the generality of this approach, by applying our methodology to a new corpus of persuasive essays. Our experiments show that replacing n-grams and syntactic rules with features and constraints using extracted argument and domain words significantly improves argument mining performance for persuasive essays.
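
As a rough illustration of the idea sketched in this abstract (not the authors' exact pipeline), the following Python snippet fits an LDA topic model over a few made-up essay sentences and post-processes it: a hypothetical seed list of argumentative indicators picks out one topic as the "argument" topic, whose top words become argument words, while the remaining topics contribute domain words.

```python
# Illustrative sketch only: post-process an LDA topic model to split the vocabulary
# into "argument" words (argumentative shell language) and "domain" words
# (essay-topic content). The seed list and toy corpus are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

essays = [
    "I believe school uniforms should be required because they reduce bullying.",
    "Therefore, recycling programs are essential; in conclusion, cities must fund them.",
    "Some argue that homework improves grades, but evidence suggests otherwise.",
]
ARGUMENT_SEEDS = {"believe", "because", "therefore", "conclusion", "argue", "evidence"}

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(essays)
vocab = vectorizer.get_feature_names_out()

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(doc_term)

def top_words(topic_weights, n=10):
    return [vocab[i] for i in topic_weights.argsort()[::-1][:n]]

topics = [top_words(t) for t in lda.components_]
# The topic that overlaps most with the seed list is treated as the "argument" topic;
# the other topics contribute domain words.
arg_idx = max(range(len(topics)), key=lambda k: len(ARGUMENT_SEEDS & set(topics[k])))
argument_words = set(topics[arg_idx])
domain_words = {w for k, t in enumerate(topics) if k != arg_idx for w in t}

print("argument words:", sorted(argument_words))
print("domain words:", sorted(domain_words - argument_words))
```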


Artificial Intelligence in Education | 2013

Prosodic Entrainment and Tutoring Dialogue Success

Jesse Thomason; Huy V. Nguyen; Diane J. Litman

This study investigates the relationships between student entrainment to a tutoring dialogue system and learning. By finding the features of prosodic entrainment which correlate with learning, we hope to inform educational dialogue systems aiming to leverage entrainment. We propose a novel method to measure prosodic entrainment and find specific features which correlate with user learning. We also find differences in user entrainment with respect to tutor voice and user gender.
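
A minimal sketch of one way to quantify prosodic entrainment as convergence: check whether the absolute difference between the tutor's and the student's mean pitch shrinks over the dialogue. The turn values are fabricated, and the paper's actual entrainment measures may differ.

```python
# Convergence-style entrainment sketch on invented per-turn pitch values.
import numpy as np

tutor_pitch = np.array([210.0, 205.0, 200.0, 198.0])    # mean F0 (Hz) per tutor turn
student_pitch = np.array([170.0, 180.0, 188.0, 193.0])  # mean F0 (Hz) per student turn

diffs = np.abs(tutor_pitch - student_pitch)               # turn-by-turn prosodic distance
half = len(diffs) // 2
convergence = diffs[:half].mean() - diffs[half:].mean()   # > 0 means the speakers moved closer

print("per-turn distances:", diffs)
print(f"convergence score: {convergence:.1f} Hz")
```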


Artificial Intelligence in Education | 2013

Identifying Localization in Peer Reviews of Argument Diagrams

Huy V. Nguyen; Diane J. Litman

Peer-review systems such as SWoRD lack intelligence for detecting and responding to problems with students’ reviewing performance. While prior work has demonstrated the feasibility of automatically identifying desirable feedback features in free-text reviews of student papers, similar methods have not yet been developed for feedback regarding argument diagrams. One desirable feedback feature is problem localization, which has been shown to positively correlate with feedback implementation in both student papers and argument diagrams. In this paper we demonstrate that features previously developed for identifying localization in paper reviews do not work well when applied to peer reviews of argument diagrams. We develop a novel algorithm tailored for reviews of argument diagrams, and demonstrate significant performance improvements in identifying problem localization in an experimental evaluation.


North American Chapter of the Association for Computational Linguistics | 2016

Instant Feedback for Increasing the Presence of Solutions in Peer Reviews

Huy V. Nguyen; Wenting Xiong; Diane J. Litman

We present the design and evaluation of a web-based peer review system that uses natural language processing to automatically evaluate and provide instant feedback regarding the presence of solutions in peer reviews. Student reviewers can then choose to either revise their reviews to address the system’s feedback, or ignore the feedback and submit their original reviews. A system deployment in multiple high school classrooms shows that our solution prediction model triggers instant feedback with high precision, and that the feedback is successful in increasing the number of peer reviews with solutions.
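
The core loop described here can be sketched as a text classifier that predicts whether a peer-review comment contains a solution and, if not, surfaces instant feedback. The training comments, model choice, and feedback wording below are invented for illustration; the deployed system's model and features are not specified here.

```python
# Hypothetical sketch of solution-presence prediction plus an instant-feedback trigger.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = [
    "The intro is confusing.",                                       # problem only
    "Your hypothesis is unclear; try stating it in one sentence.",   # offers a solution
    "Figure 2 is hard to read.",                                     # problem only
    "Add a transition sentence before the second paragraph.",        # offers a solution
]
has_solution = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_comments, has_solution)

new_comment = "The conclusion feels weak."
if model.predict([new_comment])[0] == 0:
    print("Instant feedback: consider suggesting how the author could fix this issue.")
else:
    print("Comment accepted.")
```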


Meeting of the Association for Computational Linguistics | 2016

Context-aware Argumentative Relation Mining

Huy V. Nguyen; Diane J. Litman

Context is crucial for identifying argumentative relations in text, but many argument mining methods make little use of contextual features. This paper presents context-aware argumentative relation mining that uses features extracted from writing topics as well as from windows of context sentences. Experiments on student essays demonstrate that the proposed features improve predictive performance in two argumentative relation classification tasks.
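
A simplified illustration of what "context-aware" features might look like for a pair of candidate sentences: besides the two sentences themselves, features draw on the essay's writing topic and a window of surrounding sentences. The feature set and essay below are invented; the paper's actual features are richer.

```python
# Toy context-aware feature extraction for an argumentative relation candidate.
def tokens(text):
    return set(text.lower().replace(".", "").split())

essay_topic = "Should schools require uniforms?"
sentences = [
    "Uniforms reduce visible income differences among students.",
    "Bullying often targets clothing choices.",
    "Therefore schools should require uniforms.",
]

def relation_features(src_idx, tgt_idx, window=1):
    src, tgt = sentences[src_idx], sentences[tgt_idx]
    context = sentences[max(0, tgt_idx - window):tgt_idx]  # sentences preceding the target
    ctx_tokens = set().union(*(tokens(s) for s in context)) if context else set()
    return {
        "src_tgt_overlap": len(tokens(src) & tokens(tgt)),
        "src_topic_overlap": len(tokens(src) & tokens(essay_topic)),
        "src_context_overlap": len(tokens(src) & ctx_tokens),
        "distance": tgt_idx - src_idx,
    }

# Does sentence 0 support sentence 2 ("Therefore schools should require uniforms.")?
print(relation_features(0, 2))
```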


Workshop on Innovative Use of NLP for Building Educational Applications | 2014

Improving Peer Feedback Prediction: The Sentence Level is Right

Huy V. Nguyen; Diane J. Litman

Recent research aims to automatically predict whether peer feedback is of high quality, e.g. suggests solutions to identified problems. While prior studies have focused on peer review of papers, similar issues arise when reviewing diagrams and other artifacts. In addition, previous studies have not carefully examined how the level of prediction granularity impacts both accuracy and educational utility. In this paper we develop models for predicting the quality of peer feedback regarding argument diagrams. We propose to perform prediction at the sentence level, even though the educational task is to label feedback at a multi-sentential comment level. We first introduce a corpus annotated at a sentence level granularity, then build comment prediction models using this corpus. Our results show that aggregating sentence prediction outputs to label comments not only outperforms approaches that directly train on comment annotations, but also provides useful information for enhancing peer review systems with new functionality.
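
The aggregation idea can be sketched as follows: score each sentence of a feedback comment with a sentence-level model, then label the whole comment positive if any sentence is predicted positive. The keyword heuristic below is only a stand-in for a trained sentence-level classifier so the example runs end to end.

```python
# Minimal sketch: aggregate sentence-level predictions into a comment-level label.
def sentence_predicts_solution(sentence):
    # Stand-in for a trained sentence-level model (illustrative heuristic only).
    cues = ("try", "add", "consider", "you could", "rewrite")
    return any(cue in sentence.lower() for cue in cues)

def label_comment(comment):
    sentences = [s.strip() for s in comment.split(".") if s.strip()]
    sentence_labels = [sentence_predicts_solution(s) for s in sentences]
    return any(sentence_labels)  # any positive sentence makes the comment positive

comment = "The second paragraph repeats the first. Consider merging them into one."
print(label_comment(comment))  # True: one sentence offers a solution
```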


Intelligent Tutoring Systems | 2014

Classroom Evaluation of a Scaffolding Intervention for Improving Peer Review Localization

Huy V. Nguyen; Wenting Xiong; Diane J. Litman

A peer review system that automatically evaluates student feedback comments was deployed in a university research methods course. The course required students to create an argument diagram to justify a hypothesis, then use this diagram to write a paper introduction. Diagram and paper first drafts were both reviewed by peers. During peer review, the system automatically analyzed the quality of student comments with respect to localization (i.e. pinpointing the source of the comment in the diagram or paper). Two localization models (one for diagram and one for paper reviews) triggered a system scaffolding intervention to improve review quality whenever the review was predicted to have a ratio of localized comments less than a threshold. Reviewers could then choose to revise their comments or ignore the scaffolding. Our analysis of data from system logs demonstrates that diagram and paper localization models have high prediction accuracy, and that a larger portion of student feedback comments are successfully localized after scaffolded revision.
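
The scaffolding trigger described here can be sketched as a simple rule over per-comment localization predictions: intervene when the predicted ratio of localized comments in a review falls below a threshold. The threshold value and predictions below are hypothetical.

```python
# Sketch of the threshold-based scaffolding trigger on invented predictions.
LOCALIZATION_THRESHOLD = 0.5  # illustrative value; the deployed threshold is not stated here

def should_scaffold(predicted_localized):
    """predicted_localized: list of booleans, one per comment in the review."""
    if not predicted_localized:
        return False
    ratio = sum(predicted_localized) / len(predicted_localized)
    return ratio < LOCALIZATION_THRESHOLD

review_predictions = [True, False, False, False]  # only 1 of 4 comments localized
if should_scaffold(review_predictions):
    print("Scaffolding: please point to where in the diagram or paper each issue occurs.")
```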


Artificial Intelligence in Education | 2017

Iterative Design and Classroom Evaluation of Automated Formative Feedback for Improving Peer Feedback Localization

Huy V. Nguyen; Wenting Xiong; Diane J. Litman

A peer-review system that automatically evaluates and provides formative feedback on free-text feedback comments of students was iteratively designed and evaluated in college and high-school classrooms. Classroom assignments required students to write paper drafts and submit them to a peer-review system. When student peers later submitted feedback comments on the papers to the system, Natural Language Processing was used to automatically evaluate peer feedback quality with respect to localization (i.e., pinpointing the source of the comment in the paper being reviewed). These evaluations in turn triggered immediate formative feedback by the system, which was designed to increase peer feedback localization whenever a feedback submission was predicted to have a ratio of localized comments less than a threshold. System feedback was dynamically generated based on the results of localization prediction. Reviewers could choose to either revise their feedback comments to address the system’s feedback or could ignore the feedback. Our analysis of data from system logs demonstrates that our peer feedback localization prediction model triggered the formative feedback with high precision, particularly when peer feedback comments were written by college students. Our findings also show that although students often incorrectly disagree with the system’s feedback, when they do revise their peer feedback comments, the system feedback was successful in increasing peer feedback localization (although the sample size was low). Finally, while most peer comments were revised immediately after the system feedback, the desired revision behavior also occurred further after such system feedback.


Artificial Intelligence in Education | 2013

Predicting Low vs. High Disparity between Peer and Expert Ratings in Peer Reviews of Physics Lab Reports

Huy V. Nguyen; Diane J. Litman

Our interest in this work is to automatically predict whether peer ratings have high or low agreement in terms of disparity with instructor ratings, using solely features extracted from quantitative peer ratings and text-based peer comments. Experimental results suggest that our model can indeed outperform a majority baseline in predicting low versus high rating disparity. Furthermore, the reliability of both peer ratings and comments (in terms of peer disagreement) shows little correlation to disparity.
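
A worked example of the prediction target: disparity is the gap between the aggregated peer rating and the instructor's rating, binarized into low vs. high. The numbers and the median split below are illustrative assumptions, not the paper's exact definitions.

```python
# Illustrative computation of low/high rating disparity and a majority baseline.
import statistics

# (mean peer rating, instructor rating) per lab report, on the same scale
reports = [(4.0, 4.0), (3.5, 2.0), (5.0, 4.5), (2.0, 4.0), (4.5, 4.0)]

disparities = [abs(peer - expert) for peer, expert in reports]
cutoff = statistics.median(disparities)
labels = ["low" if d <= cutoff else "high" for d in disparities]

# A majority baseline predicts the most frequent label for every report;
# the paper's model aims to beat this using peer-rating and comment features.
majority = max(set(labels), key=labels.count)
print(list(zip(disparities, labels)), "| majority baseline predicts:", majority)
```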


The Florida AI Research Society | 2016

Improving Argument Mining in Student Essays by Learning and Exploiting Argument Indicators versus Essay Topics

Huy V. Nguyen; Diane J. Litman

Collaboration


Dive into Huy V. Nguyen's collaborations.

Top Co-Authors

Wenting Xiong

University of Pittsburgh

Jesse Thomason

University of Pittsburgh
