Ashutosh Modi
Saarland University
Publications
Featured research published by Ashutosh Modi.
Conference on Computational Natural Language Learning | 2014
Ashutosh Modi; Ivan Titov
Induction of common sense knowledge about prototypical sequences of events has recently received much attention (e.g., Chambers and Jurafsky (2008); Regneri et al. (2010)). Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated. We show that this approach results in a substantial boost in performance on the event ordering task with respect to previous approaches, both on natural and crowdsourced texts.
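To make the idea concrete, here is a minimal sketch in PyTorch, not the authors' architecture: an event vector is composed from predicate and argument embeddings, a linear layer maps it to a scalar ordering score, and a pairwise margin ranking loss trains the composition and ranking parameters jointly. All names, sizes, and token ids (EventRanker, dim=64, the toy events) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EventRanker(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)   # shared predicate/argument embeddings
        self.compose = nn.Linear(2 * dim, dim)     # composes predicate + argument vectors
        self.score = nn.Linear(dim, 1)             # scalar "temporal position" score

    def forward(self, predicate_ids, argument_ids):
        pred = self.emb(predicate_ids)             # (batch, dim)
        arg = self.emb(argument_ids).mean(dim=1)   # average over the event's arguments
        event = torch.tanh(self.compose(torch.cat([pred, arg], dim=-1)))
        return self.score(event).squeeze(-1)       # (batch,)

# Toy usage: event A ("enter restaurant") should be ordered before event B ("pay bill").
model = EventRanker(vocab_size=100)
loss_fn = nn.MarginRankingLoss(margin=1.0)
score_a = model(torch.tensor([1]), torch.tensor([[2, 3]]))
score_b = model(torch.tensor([4]), torch.tensor([[5, 6]]))
loss = loss_fn(score_b, score_a, torch.tensor([1.0]))  # target 1: want score_b > score_a
loss.backward()
```

The only point of the sketch is the joint training of the compositional and ranking parts; the paper's actual composition function and training setup differ.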
Conference on Computational Natural Language Learning | 2016
Ashutosh Modi
A semantic script is a conceptual representation which defines how events are organized into higher-level activities. Practically all previous approaches to inducing script knowledge from text have relied on count-based techniques (e.g., generative models) and have not attempted to compositionally model events. In this work, we introduce a neural network model which relies on distributed compositional representations of events. The model captures statistical dependencies between events in a scenario, overcomes some of the shortcomings of previous approaches (e.g., by dealing with data sparsity more effectively) and outperforms count-based counterparts on the narrative cloze task.
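As a rough illustration only, and not the paper's model, the sketch below scores candidate events for a narrative cloze query by composing a distributed event representation from predicate and argument vectors and comparing it (by cosine similarity) with the mean of the context-event representations; the toy vocabulary, the averaging composition, and the untrained random vectors are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
vocab = {w: rng.normal(size=dim) for w in
         ["enter", "order", "eat", "pay", "leave", "customer", "food", "bill"]}

def event_embedding(predicate, arguments):
    """Compose an event embedding from its predicate and argument vectors."""
    vecs = [vocab[predicate]] + [vocab[a] for a in arguments]
    return np.tanh(np.mean(vecs, axis=0))

def cloze_score(context_events, candidate):
    """Cosine similarity between the candidate event and the mean context event."""
    ctx = np.mean([event_embedding(p, args) for p, args in context_events], axis=0)
    cand = event_embedding(*candidate)
    return float(ctx @ cand / (np.linalg.norm(ctx) * np.linalg.norm(cand)))

context = [("enter", ["customer"]),
           ("order", ["customer", "food"]),
           ("eat", ["customer", "food"])]
for candidate in [("pay", ["customer", "bill"]), ("enter", ["customer"])]:
    print(candidate[0], round(cloze_score(context, candidate), 3))
```

In the paper the representations are trained on script data rather than drawn at random; the sketch only shows where compositional event embeddings plug into the cloze scoring step.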
Joint Conference on Lexical and Computational Semantics | 2017
Dai Quoc Nguyen; Dat Quoc Nguyen; Ashutosh Modi; Stefan Thater; Manfred Pinkal
Word embeddings are now a standard technique for inducing meaning representations for words. To obtain good representations, it is important to take the different senses of a word into account. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes previous work in that it can induce different weights for the different senses of a word. Experimental results show that our model outperforms previous models on standard evaluation tasks.
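A minimal sketch of the general mixture idea, in PyTorch and under my own assumptions (K sense vectors per word, softmax weights derived from compatibility with a context vector); it is not the paper's exact formulation, and the names (MultiSenseEmbedding, num_senses, context_vec) are illustrative.

```python
import torch
import torch.nn as nn

class MultiSenseEmbedding(nn.Module):
    def __init__(self, vocab_size, num_senses=3, dim=50):
        super().__init__()
        self.senses = nn.Embedding(vocab_size * num_senses, dim)  # K sense vectors per word
        self.num_senses = num_senses

    def forward(self, word_id, context_vec):
        # Gather the K sense vectors of this word: shape (K, dim).
        idx = word_id * self.num_senses + torch.arange(self.num_senses)
        sense_vecs = self.senses(idx)
        # Sense weights from compatibility with the context vector.
        weights = torch.softmax(sense_vecs @ context_vec, dim=0)   # shape (K,)
        return weights @ sense_vecs                                # mixture, shape (dim,)

model = MultiSenseEmbedding(vocab_size=1000)
context = torch.randn(50)                 # e.g., averaged context-word embeddings
vec = model(torch.tensor(7), context)     # mixture embedding for word id 7
print(vec.shape)                          # torch.Size([50])
```

The mixture weights are what distinguishes this from assigning each occurrence to a single sense: every sense contributes, but with a context-dependent weight.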
Joint Conference on Lexical and Computational Semantics | 2015
Rachel Rudinger; Vera Demberg; Ashutosh Modi; Benjamin Van Durme; Manfred Pinkal
The automatic induction of scripts (Schank and Abelson, 1977) has been the focus of many recent works. In this paper, we employ a variety of these methods to learn Schank and Abelson’s canonical restaurant script, using a novel dataset of restaurant narratives we have compiled from a website called “Dinners from Hell.” Our models learn narrative chains, script-like structures that we evaluate with the “narrative cloze” task (Chambers and Jurafsky, 2008).
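For illustration, the sketch below runs a narrative-cloze style evaluation with a toy count-based chain model: co-occurrence counts between (verb, dependency) events are collected from training chains, one event of a test chain is held out, and candidates are ranked by their summed co-occurrence with the observed events. The chains and candidates are invented examples, not the paper's models or the "Dinners from Hell" data.

```python
from collections import Counter
from itertools import combinations

# Toy training chains of (verb, protagonist-dependency) events.
chains = [
    [("enter", "subj"), ("order", "subj"), ("eat", "subj"), ("pay", "subj")],
    [("enter", "subj"), ("order", "subj"), ("complain", "subj"), ("leave", "subj")],
]
cooc = Counter()
for chain in chains:
    for e1, e2 in combinations(chain, 2):
        cooc[(e1, e2)] += 1
        cooc[(e2, e1)] += 1

def score(context, candidate):
    """Sum of co-occurrence counts between the candidate and the observed events."""
    return sum(cooc[(c, candidate)] for c in context)

# Narrative cloze: hold out one event and rank candidates by score.
context = [("enter", "subj"), ("order", "subj"), ("eat", "subj")]
held_out = ("pay", "subj")
candidates = [("pay", "subj"), ("leave", "subj"), ("complain", "subj")]
ranked = sorted(candidates, key=lambda c: score(context, c), reverse=True)
print("rank of held-out event:", ranked.index(held_out) + 1)
```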
North American Chapter of the Association for Computational Linguistics | 2012
Ashutosh Modi; Ivan Titov; Alexandre Klementiev
arXiv: Learning | 2013
Ashutosh Modi; Ivan Titov
Language Resources and Evaluation | 2016
Ashutosh Modi; Tatjana Anikina; Simon Ostermann; Manfred Pinkal
North American Chapter of the Association for Computational Linguistics | 2018
Simon Ostermann; Michael Roth; Ashutosh Modi; Stefan Thater; Manfred Pinkal
Language Resources and Evaluation | 2018
Simon Ostermann; Ashutosh Modi; Michael Roth; Stefan Thater; Manfred Pinkal
Transactions of the Association for Computational Linguistics | 2017
Ashutosh Modi; Ivan Titov; Vera Demberg; Asad B. Sayeed; Manfred Pinkal