Snigdha Chaturvedi
University of Maryland, College Park
Publications
Featured research published by Snigdha Chaturvedi.
Meeting of the Association for Computational Linguistics | 2014
Snigdha Chaturvedi; Dan Goldwasser; Hal Daumé
Instructor intervention in student discussion forums is a vital component in Massive Open Online Courses (MOOCs), where personalized interaction is limited. This paper introduces the problem of predicting instructor interventions in MOOC forums. We propose several prediction models designed to capture unique aspects of MOOCs, combining course information, forum structure and posts content. Our models abstract contents of individual posts of threads using latent categories, learned jointly with the binary intervention prediction problem. Experiments over data from two Coursera MOOCs demonstrate that incorporating the structure of threads into the learning problem leads to better predictive performance.
Association for Information Science and Technology | 2016
Kathy McKeown; Hal Daumé; Snigdha Chaturvedi; John Paparrizos; Kapil Thadani; Pablo Barrio; Or Biran; Suvarna Bothe; Michael Collins; Kenneth R. Fleischmann; Luis Gravano; Rahul Jha; Ben King; Kevin McInerney; Taesun Moon; Arvind Neelakantan; Diarmuid O'Seaghdha; Dragomir R. Radev; Clay Templeton; Simone Teufel
New scientific concepts, interpreted broadly, are continuously introduced in the literature, but relatively few concepts have a long‐term impact on society. The identification of such concepts is a challenging prediction task that would help multiple parties—including researchers and the general public—focus their attention within the vast scientific literature. In this paper we present a system that predicts the future impact of a scientific concept, represented as a technical term, based on the information available from recently published research articles. We analyze the usefulness of rich features derived from the full text of the articles through a variety of approaches, including rhetorical sentence analysis, information extraction, and time‐series analysis. The results from two large‐scale experiments with 3.8 million full‐text articles and 48 million metadata records support the conclusion that full‐text features are significantly more useful for prediction than metadata‐only features and that the most accurate predictions result from combining the metadata and full‐text features. Surprisingly, these results hold even when the metadata features are available for a much larger number of documents than are available for the full‐text features.
North American Chapter of the Association for Computational Linguistics | 2016
Mohit Iyyer; Anupam Guha; Snigdha Chaturvedi; Jordan L. Boyd-Graber; Hal Daumé
Understanding how a fictional relationship between two characters changes over time (e.g., from best friends to sworn enemies) is a key challenge in digital humanities scholarship. We present a novel unsupervised neural network for this task that incorporates dictionary learning to generate interpretable, accurate relationship trajectories. While previous work on characterizing literary relationships relies on plot summaries annotated with predefined labels, our model jointly learns a set of global relationship descriptors as well as a trajectory over these descriptors for each relationship in a dataset of raw text from novels. We find that our model learns descriptors of events (e.g., marriage or murder) as well as interpersonal states (love, sadness). Our model outperforms topic model baselines on two crowdsourced tasks, and we also find interesting correlations to annotations in an existing dataset.
Computer Graphics Forum | 2014
Snigdha Chaturvedi; Cody Dunne; Zahra Ashktorab; R. Zachariah; Ben Shneiderman
An important part of network analysis is understanding community structures like topological clusters and attribute‐based groups. Standard approaches for showing communities using colour, shape, rectangular bounding boxes, convex hulls or force‐directed layout algorithms remain valuable; however, our Group‐in‐a‐Box meta‐layouts add a fresh strategy for presenting community membership, internal structure and inter‐cluster relationships. This paper extends the basic Group‐in‐a‐Box meta‐layout, which uses a Treemap substrate of rectangular regions whose size is proportional to community size. When there are numerous inter‐community relationships, the proposed extensions help users view them more clearly: (1) the Croissant–Doughnut meta‐layout applies empirically determined rules for box arrangement to improve space utilization while still showing inter‐community relationships, and (2) the Force‐Directed layout arranges community boxes based on their aggregate ties at the cost of additional space. Our free and open source reference implementation in NodeXL includes heuristics to choose what we have found to be the preferable Group‐in‐a‐Box meta‐layout to show networks with varying numbers or sizes of communities. Case study examples, a pilot comparative user preference study (nine participants), and a readability measure‐based evaluation of 309 Twitter networks demonstrate the utility of the proposed meta‐layouts.
Conference on Computational Natural Language Learning | 2017
Haoruo Peng; Snigdha Chaturvedi; Dan Roth
Understanding stories – sequences of events – is a crucial yet challenging natural language understanding task. These events typically carry multiple aspects of semantics including actions, entities and emotions. Not only does each individual aspect contribute to the meaning of the story, but so does the interaction among these aspects. Building on this intuition, we propose to jointly model important aspects of semantic knowledge – frames, entities and sentiments – via a semantic language model. We achieve this by first representing these aspects’ semantic units at an appropriate level of abstraction and then using the resulting vector representations for each semantic aspect to learn a joint representation via a neural language model. We show that the joint semantic language model is of high quality and can generate better semantic sequences than models that operate on the word level. We further demonstrate that our joint model can be applied to the story cloze test and shallow discourse parsing tasks with improved performance, and that each semantic aspect contributes to the model.
International Conference on Data Mining | 2013
Snigdha Chaturvedi; Hal Daumé; Taesun Moon
This paper proposes a space-efficient, discriminatively enhanced topic model: a V-structured topic model with an embedded log-linear component. The discriminative log-linear component reduces the number of parameters to be learnt while outperforming baseline generative models. At the same time, the explanatory power of the generative component is not compromised. We establish its superiority over a purely generative model by applying it to two different ranking tasks: (a) In the first task, we look at the problem of proposing alternative citations given textual and bibliographic evidence. We solve it as a ranking problem in itself and as a platform for further qualitative analysis of the convergence of scientific phenomena. (b) In the second task we address the problem of ranking potential email recipients based on email content and sender information.
National Conference on Artificial Intelligence | 2016
Shashank Srivastava; Snigdha Chaturvedi; Tom M. Mitchell
International World Wide Web Conference | 2014
Snigdha Chaturvedi; Vittorio Castelli; Radu Florian; Ramesh Nallapati; Hema Raghavan
National Conference on Artificial Intelligence | 2017
Snigdha Chaturvedi; Mohit Iyyer; Hal Daumé
National Conference on Artificial Intelligence | 2016
Snigdha Chaturvedi; Shashank Srivastava; Hal Daumé; Chris Dyer