Publication


Featured research published by Joonsuk Park.


Meeting of the Association for Computational Linguistics | 2014

Identifying Appropriate Support for Propositions in Online User Comments

Joonsuk Park; Claire Cardie

The ability to analyze the adequacy of supporting information is necessary for determining the strength of an argument. This is especially the case for online user comments, which often consist of arguments lacking proper substantiation and reasoning. Thus, we develop a framework for automatically classifying each proposition as UNVERIFIABLE, VERIFIABLE NONEXPERIENTIAL, or VERIFIABLE EXPERIENTIAL, where the appropriate type of support is reason, evidence, and optional evidence, respectively. Once the existing support for propositions is identified, this classification can provide an estimate of how adequately the arguments have been supported. We build a gold-standard dataset of 9,476 sentences and clauses from 1,047 comments submitted to an eRulemaking platform and find that Support Vector Machine (SVM) classifiers trained with n-grams and additional features capturing verifiability and experientiality exhibit a statistically significant improvement over the unigram baseline, achieving a macro-averaged F1 of 68.99%.
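
As a rough illustration of the classification setup described above, the following sketch trains a linear SVM on unigram and bigram counts over a handful of invented propositions. It is not the authors' code or data: the example texts, labels, and library choice (scikit-learn) are assumptions for illustration, and the paper's full feature set goes beyond plain n-grams.

```python
# Minimal sketch, not the authors' implementation: an n-gram SVM classifier
# for proposition support types, in the spirit of Park and Cardie (2014).
# All example texts and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "The proposed rule will raise costs for small businesses.",  # verifiable, non-experiential
    "I waited six months for my claim to be processed.",         # verifiable, experiential
    "This regulation is simply unfair.",                         # unverifiable
    "The agency's own report estimates a 12% cost increase.",    # verifiable, non-experiential
]
train_labels = [
    "VERIFIABLE_NONEXPERIENTIAL",
    "VERIFIABLE_EXPERIENTIAL",
    "UNVERIFIABLE",
    "VERIFIABLE_NONEXPERIENTIAL",
]

# Unigram + bigram counts feed a linear SVM; the paper adds further
# verifiability- and experientiality-oriented features on top of n-grams.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)

print(model.predict(["My family has relied on this program for years."]))
```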


North American Chapter of the Association for Computational Linguistics | 2015

Conditional Random Fields for Identifying Appropriate Types of Support for Propositions in Online User Comments

Joonsuk Park; Arzoo Katiyar; Bishan Yang

Park and Cardie (2014) proposed a novel task of automatically identifying appropriate types of support for propositions comprising online user comments, as an essential step toward automated analysis of the adequacy of supporting information. While multiclass Support Vector Machines (SVMs) proved to work reasonably well, they do not exploit the sequential nature of the problem: for instance, verifiable experiential propositions tend to appear together, because a personal narrative typically spans multiple propositions. According to our experiments, however, Conditional Random Fields (CRFs) degrade the overall performance, and we discuss potential fixes to this problem. Nonetheless, we observe that the F1 score for the unverifiable proposition class increases. Also, semi-supervised CRFs with posterior regularization, trained with only 75% of the training data labeled, can closely match the performance of a supervised CRF trained on the same data with the remaining 25% labeled as well.
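
A minimal sketch of the sequence-labeling formulation, assuming the sklearn-crfsuite library rather than the authors' tooling: each comment is treated as a sequence of propositions, and a linear-chain CRF labels the whole sequence jointly, so that adjacent labels (e.g., runs of verifiable experiential propositions in a personal narrative) can influence one another. The features, texts, and labels below are invented, and the semi-supervised posterior-regularization variant is not shown.

```python
# Minimal sketch, not the authors' implementation: a linear-chain CRF over
# the propositions of a single comment, using the sklearn-crfsuite library.
import sklearn_crfsuite

def proposition_features(text):
    """Toy per-proposition features; the paper uses n-grams plus
    verifiability- and experientiality-related cues."""
    tokens = text.lower().split()
    return {
        "first_person": any(t in ("i", "my", "we", "our") for t in tokens),
        "has_number": any(t.strip(".,%").isdigit() for t in tokens),
        "n_tokens": len(tokens),
    }

# One invented comment = one sequence of propositions.
X_train = [[
    proposition_features("I waited six months for my claim."),
    proposition_features("My neighbor had the same experience."),
    proposition_features("This rule is simply unfair."),
]]
y_train = [[
    "VERIFIABLE_EXPERIENTIAL",
    "VERIFIABLE_EXPERIENTIAL",
    "UNVERIFIABLE",
]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```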


International Conference on Artificial Intelligence and Law | 2015

Toward machine-assisted participation in eRulemaking: an argumentation model of evaluability

Joonsuk Park; Cheryl L. Blake; Claire Cardie

eRulemaking is an ongoing effort to use online tools to foster broader and better public participation in rulemaking --- the multi-step process that federal agencies use to develop new health, safety, and economic regulations. The increasing participation of non-expert citizens, however, has led to a growing number of arguments whose validity or strength is difficult to evaluate, both for government agencies and for fellow citizens. Such arguments typically neglect to provide the reasons behind their conclusions and objective evidence for the factual claims on which they are based. In this paper, we propose a novel argumentation model for capturing the evaluability of user comments in eRulemaking. The model is intended to underpin automated systems that assist users in constructing evaluable arguments in online commenting environments, providing quick feedback at low cost.


Meeting of the Association for Computational Linguistics | 2017

Argument Mining with Structured SVMs and RNNs

Vlad Niculae; Joonsuk Park; Claire Cardie

We propose a novel factor graph model for argument mining, designed for settings in which the argumentative relations in a document do not necessarily form a tree structure. (This is the case in over 20% of the web comments dataset we release.) Our model jointly learns elementary unit type classification and argumentative relation prediction. Moreover, our model supports SVM and RNN parametrizations, can enforce structure constraints (e.g., transitivity), and can express dependencies between adjacent relations and propositions. Our approaches outperform unstructured baselines in both web comments and argumentative essay datasets.
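
As a small illustration of one of the structure constraints mentioned above, the sketch below checks transitivity over a set of predicted argumentative links. It only demonstrates the constraint itself on invented predictions; it is not the factor graph model or its joint inference.

```python
# Minimal sketch: checking the transitivity constraint that the factor graph
# model can enforce on argumentative relations. Post-hoc illustration only;
# the example links are invented.
from itertools import product

# Predicted directed support links between proposition indices (source, target).
links = {(0, 1), (1, 2)}

def transitivity_violations(links):
    """Pairs (a, c) where a->b and b->c are predicted but a->c is not."""
    return {
        (a, c)
        for (a, b), (b2, c) in product(links, links)
        if b == b2 and a != c and (a, c) not in links
    }

print(transitivity_violations(links))  # {(0, 2)}: the a->c link is missing
```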


International Joint Conference on Natural Language Processing | 2015

Automatic Identification of Rhetorical Questions

Shohini Bhattasali; Jeremy Cytryn; Elana Feldman; Joonsuk Park

A question may be asked not only to elicit information, but also to make a statement. Questions serving the latter purpose, called rhetorical questions, are often lexically and syntactically indistinguishable from other types of questions. Still, it is desirable to be able to identify rhetorical questions, as it is relevant for many NLP tasks, including information extraction and text summarization. In this paper, we explore the largely understudied problem of rhetorical question identification. Specifically, we present a simple n-gram based language model to classify rhetorical questions in the Switchboard Dialogue Act Corpus. We find that a special treatment of rhetorical questions which incorporates contextual information achieves the highest performance.
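
To make the modeling idea concrete, here is a small sketch of a class-conditional unigram language model with add-one smoothing that labels a question as rhetorical or not by comparing log-likelihoods. The training questions are invented; the actual system is trained on the Switchboard Dialogue Act Corpus and, in its best configuration, also incorporates contextual information, which this sketch omits.

```python
# Minimal sketch, not the authors' system: unigram language models per class
# with add-one smoothing; a question gets the label of the higher-scoring model.
import math
from collections import Counter

train = {
    "rhetorical": ["who even does that", "what could possibly go wrong"],
    "other": ["what time does the meeting start", "where did you park"],
}

counts = {c: Counter(w for q in qs for w in q.split()) for c, qs in train.items()}
vocab = set(w for cnt in counts.values() for w in cnt)

def log_likelihood(question, cls):
    total = sum(counts[cls].values())
    return sum(
        math.log((counts[cls][w] + 1) / (total + len(vocab) + 1))
        for w in question.split()
    )

def classify(question):
    return max(counts, key=lambda cls: log_likelihood(question, cls))

print(classify("who could possibly object"))  # -> "rhetorical" on this toy data
```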


ACM Transactions on Internet Technology | 2017

Using Argumentative Structure to Interpret Debates in Online Deliberative Democracy and eRulemaking

John Lawrence; Joonsuk Park; Katarzyna Budzynska; Claire Cardie; Barbara Konat; Chris Reed

Governments around the world are increasingly utilising online platforms and social media to engage with, and ascertain the opinions of, their citizens. Whilst policy makers could potentially benefit from such enormous feedback from society, they first face the challenge of making sense out of the large volumes of data produced. In this article, we show how the analysis of argumentative and dialogical structures allows for the principled identification of those issues that are central, controversial, or popular in an online corpus of debates. Although areas such as controversy mining work towards identifying issues that are a source of disagreement, by looking at the deeper argumentative structure, we show that a much richer understanding can be obtained. We provide results from using a pipeline of argument-mining techniques on the debate corpus, showing that the accuracy obtained is sufficient to automatically identify those issues that are key to the discussion, attracting proportionately more support than others, and those that are divisive, attracting proportionately more conflicting viewpoints.
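
As a toy illustration of the kind of post-mining analysis described above, the sketch below tallies supporting and conflicting relations per issue from an invented set of extracted relations and flags issues as proportionately supported or divisive. The relation data and the 50% threshold are assumptions for illustration, not the authors' pipeline or metrics.

```python
# Minimal sketch: surfacing popular vs. divisive issues from extracted
# argumentative relations. The relations and threshold are invented.
from collections import Counter

# (issue, relation_type) pairs as an argument-mining pipeline might output.
relations = [
    ("raise the speed limit", "support"),
    ("raise the speed limit", "conflict"),
    ("raise the speed limit", "conflict"),
    ("expand bus routes", "support"),
    ("expand bus routes", "support"),
]

totals = Counter(issue for issue, _ in relations)
support = Counter(issue for issue, rel in relations if rel == "support")

for issue, n in totals.most_common():
    share = support[issue] / n
    label = "popular" if share >= 0.5 else "divisive"
    print(f"{issue}: {n} relations, {share:.0%} supportive -> {label}")
```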


Technical Symposium on Computer Science Education | 2016

The Effects of Peer- and Self-assessment on the Assessors

Joonsuk Park; Kimberley Williams

Recently, there has been growing interest in peer- and self-assessment (PSA) in the research community, especially with the development of massive open online courses (MOOCs). One prevalent theme in the literature is the consideration of PSA as a partial or full replacement for traditional assessments performed by the instructor. And since the traditional role of students in assessment is that of the assessee, existing work on PSA typically focuses on devising methods to make the grades more reliable and beneficial for the assessees. What has been missing from the picture is the assessor: how are those conducting peer- and self-assessment affected by the process? This question has become relevant from an educational perspective, because in PSA the students take on the role of the assessor as well. We present PSA as an active learning exercise for the assessors and examine its impact. To do so, we incorporated PSA into a university-level Introduction to Natural Language Processing course with more than 100 students and analyzed student surveys and exam results for the peer-, self-, and no-assessment groups. The final exam performance suggests that PSA is helpful for learning, which is consistent with the student survey results. Also, students generally enjoyed conducting PSA.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2012

Improving Implicit Discourse Relation Recognition Through Feature Set Optimization

Joonsuk Park; Claire Cardie


Digital Government Research | 2012

Facilitative moderation for online participation in eRulemaking

Joonsuk Park; Sally Klingel; Claire Cardie; Mary J. Newhart; Cynthia R. Farina; Joan-Josep Vallbé


Language Resources and Evaluation | 2016

A Corpus of Argument Networks: Using Graph Properties to Analyse Divisive Issues

Barbara Konat; John Lawrence; Joonsuk Park; Katarzyna Budzynska; Chris Reed

Collaboration


Dive into Joonsuk Park's collaboration.

Top Co-Authors

Katarzyna Budzynska

Cardinal Stefan Wyszyński University in Warsaw
